May 15 10:10:01.744020 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 10:10:01.744041 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 09:09:56 -00 2025
May 15 10:10:01.744054 kernel: efi: EFI v2.70 by EDK II
May 15 10:10:01.744060 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 15 10:10:01.744065 kernel: random: crng init done
May 15 10:10:01.744071 kernel: ACPI: Early table checksum verification disabled
May 15 10:10:01.744077 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 15 10:10:01.744084 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 10:10:01.744089 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744095 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744100 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744105 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744110 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744116 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744124 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744130 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744135 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:10:01.744141 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 10:10:01.744147 kernel: NUMA: Failed to initialise from firmware
May 15 10:10:01.744152 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:10:01.744158 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
May 15 10:10:01.744164 kernel: Zone ranges:
May 15 10:10:01.744170 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:10:01.744176 kernel: DMA32 empty
May 15 10:10:01.744182 kernel: Normal empty
May 15 10:10:01.744190 kernel: Movable zone start for each node
May 15 10:10:01.744196 kernel: Early memory node ranges
May 15 10:10:01.744202 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 15 10:10:01.744208 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 15 10:10:01.744224 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 15 10:10:01.744244 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 15 10:10:01.744249 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 15 10:10:01.744255 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 15 10:10:01.744260 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 15 10:10:01.744266 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:10:01.744273 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 10:10:01.744279 kernel: psci: probing for conduit method from ACPI.
May 15 10:10:01.744284 kernel: psci: PSCIv1.1 detected in firmware.
May 15 10:10:01.744290 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 10:10:01.744296 kernel: psci: Trusted OS migration not required
May 15 10:10:01.744304 kernel: psci: SMC Calling Convention v1.1
May 15 10:10:01.744310 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 10:10:01.744317 kernel: ACPI: SRAT not present
May 15 10:10:01.744323 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 15 10:10:01.744329 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 15 10:10:01.744335 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 10:10:01.744341 kernel: Detected PIPT I-cache on CPU0
May 15 10:10:01.744347 kernel: CPU features: detected: GIC system register CPU interface
May 15 10:10:01.744353 kernel: CPU features: detected: Hardware dirty bit management
May 15 10:10:01.744367 kernel: CPU features: detected: Spectre-v4
May 15 10:10:01.744373 kernel: CPU features: detected: Spectre-BHB
May 15 10:10:01.744381 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 10:10:01.744388 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 10:10:01.744394 kernel: CPU features: detected: ARM erratum 1418040
May 15 10:10:01.744399 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 10:10:01.744406 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 10:10:01.744412 kernel: Policy zone: DMA
May 15 10:10:01.744419 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7
May 15 10:10:01.744425 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 10:10:01.744431 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 10:10:01.744437 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 10:10:01.744443 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 10:10:01.744451 kernel: Memory: 2457400K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114888K reserved, 0K cma-reserved)
May 15 10:10:01.744457 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 10:10:01.744463 kernel: trace event string verifier disabled
May 15 10:10:01.744469 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 10:10:01.744475 kernel: rcu: RCU event tracing is enabled.
May 15 10:10:01.744481 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 10:10:01.744488 kernel: Trampoline variant of Tasks RCU enabled.
May 15 10:10:01.744494 kernel: Tracing variant of Tasks RCU enabled.
May 15 10:10:01.744500 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 10:10:01.744506 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 10:10:01.744512 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 10:10:01.744519 kernel: GICv3: 256 SPIs implemented
May 15 10:10:01.744525 kernel: GICv3: 0 Extended SPIs implemented
May 15 10:10:01.744531 kernel: GICv3: Distributor has no Range Selector support
May 15 10:10:01.744537 kernel: Root IRQ handler: gic_handle_irq
May 15 10:10:01.744543 kernel: GICv3: 16 PPIs implemented
May 15 10:10:01.744549 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 10:10:01.744555 kernel: ACPI: SRAT not present
May 15 10:10:01.744561 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 10:10:01.744567 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 15 10:10:01.744573 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 15 10:10:01.744579 kernel: GICv3: using LPI property table @0x00000000400d0000
May 15 10:10:01.744586 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 15 10:10:01.744593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:10:01.744599 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 10:10:01.744607 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 10:10:01.744613 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 10:10:01.744619 kernel: arm-pv: using stolen time PV
May 15 10:10:01.744625 kernel: Console: colour dummy device 80x25
May 15 10:10:01.744631 kernel: ACPI: Core revision 20210730
May 15 10:10:01.744638 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 10:10:01.744645 kernel: pid_max: default: 32768 minimum: 301
May 15 10:10:01.744651 kernel: LSM: Security Framework initializing
May 15 10:10:01.744658 kernel: SELinux: Initializing.
May 15 10:10:01.744665 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:10:01.744671 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:10:01.744677 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 10:10:01.744683 kernel: rcu: Hierarchical SRCU implementation.
May 15 10:10:01.744690 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 10:10:01.744696 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 10:10:01.744702 kernel: Remapping and enabling EFI services.
May 15 10:10:01.744708 kernel: smp: Bringing up secondary CPUs ...
May 15 10:10:01.744716 kernel: Detected PIPT I-cache on CPU1
May 15 10:10:01.744722 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 10:10:01.744729 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 15 10:10:01.744735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:10:01.744742 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 10:10:01.744748 kernel: Detected PIPT I-cache on CPU2
May 15 10:10:01.744754 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 10:10:01.744761 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 15 10:10:01.744767 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:10:01.744773 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 10:10:01.744781 kernel: Detected PIPT I-cache on CPU3
May 15 10:10:01.744787 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 10:10:01.744796 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 15 10:10:01.744802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:10:01.744814 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 10:10:01.744821 kernel: smp: Brought up 1 node, 4 CPUs
May 15 10:10:01.744828 kernel: SMP: Total of 4 processors activated.
May 15 10:10:01.744835 kernel: CPU features: detected: 32-bit EL0 Support
May 15 10:10:01.744842 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 10:10:01.744849 kernel: CPU features: detected: Common not Private translations
May 15 10:10:01.744855 kernel: CPU features: detected: CRC32 instructions
May 15 10:10:01.744862 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 10:10:01.744870 kernel: CPU features: detected: LSE atomic instructions
May 15 10:10:01.744877 kernel: CPU features: detected: Privileged Access Never
May 15 10:10:01.744883 kernel: CPU features: detected: RAS Extension Support
May 15 10:10:01.744890 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 10:10:01.744897 kernel: CPU: All CPU(s) started at EL1
May 15 10:10:01.744905 kernel: alternatives: patching kernel code
May 15 10:10:01.744911 kernel: devtmpfs: initialized
May 15 10:10:01.744919 kernel: KASLR enabled
May 15 10:10:01.744929 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 10:10:01.744938 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 10:10:01.744944 kernel: pinctrl core: initialized pinctrl subsystem
May 15 10:10:01.744954 kernel: SMBIOS 3.0.0 present.
May 15 10:10:01.744961 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 15 10:10:01.744968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 10:10:01.744976 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 10:10:01.744983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 10:10:01.744990 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 10:10:01.744997 kernel: audit: initializing netlink subsys (disabled)
May 15 10:10:01.745003 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 15 10:10:01.745010 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 10:10:01.745017 kernel: cpuidle: using governor menu
May 15 10:10:01.745023 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 10:10:01.745030 kernel: ASID allocator initialised with 32768 entries
May 15 10:10:01.745038 kernel: ACPI: bus type PCI registered
May 15 10:10:01.745047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 10:10:01.745054 kernel: Serial: AMBA PL011 UART driver
May 15 10:10:01.745061 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 15 10:10:01.745067 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 15 10:10:01.745074 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 15 10:10:01.745081 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 15 10:10:01.745088 kernel: cryptd: max_cpu_qlen set to 1000
May 15 10:10:01.745094 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 10:10:01.745105 kernel: ACPI: Added _OSI(Module Device)
May 15 10:10:01.745112 kernel: ACPI: Added _OSI(Processor Device)
May 15 10:10:01.745118 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 10:10:01.745125 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 10:10:01.745131 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 15 10:10:01.745138 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 15 10:10:01.745145 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 15 10:10:01.745151 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 10:10:01.745160 kernel: ACPI: Interpreter enabled
May 15 10:10:01.745168 kernel: ACPI: Using GIC for interrupt routing
May 15 10:10:01.745174 kernel: ACPI: MCFG table detected, 1 entries
May 15 10:10:01.745183 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 10:10:01.745189 kernel: printk: console [ttyAMA0] enabled
May 15 10:10:01.745197 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 10:10:01.745341 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 10:10:01.745427 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 10:10:01.745494 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 10:10:01.745586 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 10:10:01.745653 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 10:10:01.745662 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 10:10:01.745705 kernel: PCI host bridge to bus 0000:00
May 15 10:10:01.745786 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 10:10:01.745846 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 10:10:01.745917 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 10:10:01.745980 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 10:10:01.746064 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 10:10:01.746142 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 10:10:01.746210 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 10:10:01.746300 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 10:10:01.746377 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 10:10:01.746443 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 10:10:01.746512 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 10:10:01.746581 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 10:10:01.746636 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 10:10:01.746690 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 10:10:01.746742 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 10:10:01.746751 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 10:10:01.746758 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 10:10:01.746766 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 10:10:01.746773 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 10:10:01.746779 kernel: iommu: Default domain type: Translated
May 15 10:10:01.746786 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 10:10:01.746792 kernel: vgaarb: loaded
May 15 10:10:01.746799 kernel: pps_core: LinuxPPS API ver. 1 registered
May 15 10:10:01.746806 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 15 10:10:01.746812 kernel: PTP clock support registered
May 15 10:10:01.746819 kernel: Registered efivars operations
May 15 10:10:01.746826 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 10:10:01.746833 kernel: VFS: Disk quotas dquot_6.6.0
May 15 10:10:01.746840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 10:10:01.746846 kernel: pnp: PnP ACPI init
May 15 10:10:01.746910 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 10:10:01.746920 kernel: pnp: PnP ACPI: found 1 devices
May 15 10:10:01.746930 kernel: NET: Registered PF_INET protocol family
May 15 10:10:01.746937 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 10:10:01.746945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 10:10:01.746952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 10:10:01.746958 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 10:10:01.746965 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 15 10:10:01.746972 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 10:10:01.746978 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:10:01.746985 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:10:01.746991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 10:10:01.746998 kernel: PCI: CLS 0 bytes, default 64
May 15 10:10:01.747006 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 10:10:01.747013 kernel: kvm [1]: HYP mode not available
May 15 10:10:01.747019 kernel: Initialise system trusted keyrings
May 15 10:10:01.747026 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 10:10:01.747033 kernel: Key type asymmetric registered
May 15 10:10:01.747039 kernel: Asymmetric key parser 'x509' registered
May 15 10:10:01.747045 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 10:10:01.747052 kernel: io scheduler mq-deadline registered
May 15 10:10:01.747058 kernel: io scheduler kyber registered
May 15 10:10:01.747066 kernel: io scheduler bfq registered
May 15 10:10:01.747073 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 10:10:01.747079 kernel: ACPI: button: Power Button [PWRB]
May 15 10:10:01.747086 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 10:10:01.747146 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 10:10:01.747155 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 10:10:01.747162 kernel: thunder_xcv, ver 1.0
May 15 10:10:01.747168 kernel: thunder_bgx, ver 1.0
May 15 10:10:01.747175 kernel: nicpf, ver 1.0
May 15 10:10:01.747183 kernel: nicvf, ver 1.0
May 15 10:10:01.747278 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 10:10:01.747334 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T10:10:01 UTC (1747303801)
May 15 10:10:01.747343 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 10:10:01.747350 kernel: NET: Registered PF_INET6 protocol family
May 15 10:10:01.747363 kernel: Segment Routing with IPv6
May 15 10:10:01.747370 kernel: In-situ OAM (IOAM) with IPv6
May 15 10:10:01.747377 kernel: NET: Registered PF_PACKET protocol family
May 15 10:10:01.747386 kernel: Key type dns_resolver registered
May 15 10:10:01.747392 kernel: registered taskstats version 1
May 15 10:10:01.747399 kernel: Loading compiled-in X.509 certificates
May 15 10:10:01.747405 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 3679cbfb4d4756a2ddc177f0eaedea33fb5fdf2e'
May 15 10:10:01.747412 kernel: Key type .fscrypt registered
May 15 10:10:01.747418 kernel: Key type fscrypt-provisioning registered
May 15 10:10:01.747426 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 10:10:01.747433 kernel: ima: Allocated hash algorithm: sha1
May 15 10:10:01.747440 kernel: ima: No architecture policies found
May 15 10:10:01.747447 kernel: clk: Disabling unused clocks
May 15 10:10:01.747454 kernel: Freeing unused kernel memory: 36416K
May 15 10:10:01.747460 kernel: Run /init as init process
May 15 10:10:01.747467 kernel: with arguments:
May 15 10:10:01.747474 kernel: /init
May 15 10:10:01.747483 kernel: with environment:
May 15 10:10:01.747489 kernel: HOME=/
May 15 10:10:01.747496 kernel: TERM=linux
May 15 10:10:01.747504 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 10:10:01.747514 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:10:01.747523 systemd[1]: Detected virtualization kvm.
May 15 10:10:01.747531 systemd[1]: Detected architecture arm64.
May 15 10:10:01.747538 systemd[1]: Running in initrd.
May 15 10:10:01.747544 systemd[1]: No hostname configured, using default hostname.
May 15 10:10:01.747551 systemd[1]: Hostname set to <localhost>.
May 15 10:10:01.747561 systemd[1]: Initializing machine ID from VM UUID.
May 15 10:10:01.747573 systemd[1]: Queued start job for default target initrd.target.
May 15 10:10:01.747580 systemd[1]: Started systemd-ask-password-console.path.
May 15 10:10:01.747587 systemd[1]: Reached target cryptsetup.target.
May 15 10:10:01.747594 systemd[1]: Reached target paths.target.
May 15 10:10:01.747601 systemd[1]: Reached target slices.target.
May 15 10:10:01.747608 systemd[1]: Reached target swap.target.
May 15 10:10:01.747615 systemd[1]: Reached target timers.target.
May 15 10:10:01.747622 systemd[1]: Listening on iscsid.socket.
May 15 10:10:01.747631 systemd[1]: Listening on iscsiuio.socket.
May 15 10:10:01.747638 systemd[1]: Listening on systemd-journald-audit.socket.
May 15 10:10:01.747645 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 15 10:10:01.747653 systemd[1]: Listening on systemd-journald.socket.
May 15 10:10:01.747660 systemd[1]: Listening on systemd-networkd.socket.
May 15 10:10:01.747667 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 10:10:01.747674 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 10:10:01.747681 systemd[1]: Reached target sockets.target.
May 15 10:10:01.747689 systemd[1]: Starting kmod-static-nodes.service...
May 15 10:10:01.747697 systemd[1]: Finished network-cleanup.service.
May 15 10:10:01.747704 systemd[1]: Starting systemd-fsck-usr.service...
May 15 10:10:01.747711 systemd[1]: Starting systemd-journald.service...
May 15 10:10:01.747719 systemd[1]: Starting systemd-modules-load.service...
May 15 10:10:01.747726 systemd[1]: Starting systemd-resolved.service...
May 15 10:10:01.747735 systemd[1]: Starting systemd-vconsole-setup.service...
May 15 10:10:01.747742 systemd[1]: Finished kmod-static-nodes.service.
May 15 10:10:01.747749 systemd[1]: Finished systemd-fsck-usr.service.
May 15 10:10:01.747758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 10:10:01.747768 systemd-journald[290]: Journal started
May 15 10:10:01.747811 systemd-journald[290]: Runtime Journal (/run/log/journal/d8fa276d7d0141b9a11d94b1600e267a) is 6.0M, max 48.7M, 42.6M free.
May 15 10:10:01.747842 systemd[1]: Finished systemd-vconsole-setup.service.
May 15 10:10:01.739094 systemd-modules-load[291]: Inserted module 'overlay'
May 15 10:10:01.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.753225 kernel: audit: type=1130 audit(1747303801.749:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.753247 systemd[1]: Started systemd-journald.service.
May 15 10:10:01.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.754768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 10:10:01.761670 kernel: audit: type=1130 audit(1747303801.754:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.761691 kernel: audit: type=1130 audit(1747303801.757:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.759064 systemd[1]: Starting dracut-cmdline-ask.service...
May 15 10:10:01.765118 systemd-resolved[292]: Positive Trust Anchors:
May 15 10:10:01.765135 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:10:01.765164 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:10:01.769530 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 15 10:10:01.773660 systemd[1]: Started systemd-resolved.service.
May 15 10:10:01.779322 kernel: audit: type=1130 audit(1747303801.774:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.779346 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 10:10:01.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.774559 systemd[1]: Reached target nss-lookup.target.
May 15 10:10:01.780912 systemd[1]: Finished dracut-cmdline-ask.service.
May 15 10:10:01.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.782596 systemd[1]: Starting dracut-cmdline.service...
May 15 10:10:01.786905 kernel: audit: type=1130 audit(1747303801.781:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.786927 kernel: Bridge firewalling registered
May 15 10:10:01.786185 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 15 10:10:01.791674 dracut-cmdline[307]: dracut-dracut-053
May 15 10:10:01.793963 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7
May 15 10:10:01.800241 kernel: SCSI subsystem initialized
May 15 10:10:01.809237 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 10:10:01.809287 kernel: device-mapper: uevent: version 1.0.3
May 15 10:10:01.809297 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 15 10:10:01.812119 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 15 10:10:01.812966 systemd[1]: Finished systemd-modules-load.service.
May 15 10:10:01.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.817762 systemd[1]: Starting systemd-sysctl.service...
May 15 10:10:01.819654 kernel: audit: type=1130 audit(1747303801.813:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.824963 systemd[1]: Finished systemd-sysctl.service.
May 15 10:10:01.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.829241 kernel: audit: type=1130 audit(1747303801.825:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.863235 kernel: Loading iSCSI transport class v2.0-870.
May 15 10:10:01.876233 kernel: iscsi: registered transport (tcp)
May 15 10:10:01.894239 kernel: iscsi: registered transport (qla4xxx)
May 15 10:10:01.894258 kernel: QLogic iSCSI HBA Driver
May 15 10:10:01.930799 systemd[1]: Finished dracut-cmdline.service.
May 15 10:10:01.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.932526 systemd[1]: Starting dracut-pre-udev.service...
May 15 10:10:01.936083 kernel: audit: type=1130 audit(1747303801.931:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:01.978245 kernel: raid6: neonx8 gen() 13630 MB/s
May 15 10:10:01.995235 kernel: raid6: neonx8 xor() 10676 MB/s
May 15 10:10:02.012241 kernel: raid6: neonx4 gen() 13353 MB/s
May 15 10:10:02.029234 kernel: raid6: neonx4 xor() 11046 MB/s
May 15 10:10:02.046246 kernel: raid6: neonx2 gen() 12852 MB/s
May 15 10:10:02.063236 kernel: raid6: neonx2 xor() 10176 MB/s
May 15 10:10:02.080235 kernel: raid6: neonx1 gen() 10492 MB/s
May 15 10:10:02.097250 kernel: raid6: neonx1 xor() 8699 MB/s
May 15 10:10:02.114235 kernel: raid6: int64x8 gen() 6196 MB/s
May 15 10:10:02.131244 kernel: raid6: int64x8 xor() 3522 MB/s
May 15 10:10:02.148234 kernel: raid6: int64x4 gen() 7157 MB/s
May 15 10:10:02.165243 kernel: raid6: int64x4 xor() 3826 MB/s
May 15 10:10:02.182233 kernel: raid6: int64x2 gen() 6099 MB/s
May 15 10:10:02.199235 kernel: raid6: int64x2 xor() 3287 MB/s
May 15 10:10:02.216234 kernel: raid6: int64x1 gen() 4993 MB/s
May 15 10:10:02.233331 kernel: raid6: int64x1 xor() 2620 MB/s
May 15 10:10:02.233341 kernel: raid6: using algorithm neonx8 gen() 13630 MB/s
May 15 10:10:02.233350 kernel: raid6: .... xor() 10676 MB/s, rmw enabled
May 15 10:10:02.234430 kernel: raid6: using neon recovery algorithm
May 15 10:10:02.245651 kernel: xor: measuring software checksum speed
May 15 10:10:02.245665 kernel: 8regs : 16725 MB/sec
May 15 10:10:02.245677 kernel: 32regs : 20103 MB/sec
May 15 10:10:02.246278 kernel: arm64_neon : 27087 MB/sec
May 15 10:10:02.246288 kernel: xor: using function: arm64_neon (27087 MB/sec)
May 15 10:10:02.299237 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 15 10:10:02.309298 systemd[1]: Finished dracut-pre-udev.service.
May 15 10:10:02.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:02.311138 systemd[1]: Starting systemd-udevd.service...
May 15 10:10:02.314624 kernel: audit: type=1130 audit(1747303802.309:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:02.310000 audit: BPF prog-id=7 op=LOAD
May 15 10:10:02.310000 audit: BPF prog-id=8 op=LOAD
May 15 10:10:02.327012 systemd-udevd[492]: Using default interface naming scheme 'v252'.
May 15 10:10:02.331449 systemd[1]: Started systemd-udevd.service.
May 15 10:10:02.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:02.333476 systemd[1]: Starting dracut-pre-trigger.service...
May 15 10:10:02.344435 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
May 15 10:10:02.370537 systemd[1]: Finished dracut-pre-trigger.service.
May 15 10:10:02.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:02.372108 systemd[1]: Starting systemd-udev-trigger.service...
May 15 10:10:02.405011 systemd[1]: Finished systemd-udev-trigger.service.
May 15 10:10:02.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:02.433245 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 10:10:02.437434 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 10:10:02.437449 kernel: GPT:9289727 != 19775487
May 15 10:10:02.437458 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 10:10:02.437466 kernel: GPT:9289727 != 19775487
May 15 10:10:02.437480 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 10:10:02.437488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:10:02.455323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 15 10:10:02.457610 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (553)
May 15 10:10:02.458583 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 15 10:10:02.464283 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 15 10:10:02.467891 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 15 10:10:02.474365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 10:10:02.476082 systemd[1]: Starting disk-uuid.service...
May 15 10:10:02.482030 disk-uuid[568]: Primary Header is updated.
May 15 10:10:02.482030 disk-uuid[568]: Secondary Entries is updated.
May 15 10:10:02.482030 disk-uuid[568]: Secondary Header is updated.
May 15 10:10:02.486236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:10:02.494237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:10:02.497386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:10:03.498000 disk-uuid[569]: The operation has completed successfully.
May 15 10:10:03.499176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:10:03.522094 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 10:10:03.523258 systemd[1]: Finished disk-uuid.service.
May 15 10:10:03.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.528207 systemd[1]: Starting verity-setup.service...
May 15 10:10:03.545252 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 10:10:03.564847 systemd[1]: Found device dev-mapper-usr.device.
May 15 10:10:03.566946 systemd[1]: Mounting sysusr-usr.mount...
May 15 10:10:03.568970 systemd[1]: Finished verity-setup.service.
May 15 10:10:03.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.616106 systemd[1]: Mounted sysusr-usr.mount.
May 15 10:10:03.617475 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 15 10:10:03.616975 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 15 10:10:03.617654 systemd[1]: Starting ignition-setup.service...
May 15 10:10:03.620145 systemd[1]: Starting parse-ip-for-networkd.service...
May 15 10:10:03.626708 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 10:10:03.626741 kernel: BTRFS info (device vda6): using free space tree
May 15 10:10:03.626755 kernel: BTRFS info (device vda6): has skinny extents
May 15 10:10:03.634557 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 10:10:03.641102 systemd[1]: Finished ignition-setup.service.
May 15 10:10:03.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.642609 systemd[1]: Starting ignition-fetch-offline.service...
May 15 10:10:03.695362 systemd[1]: Finished parse-ip-for-networkd.service.
May 15 10:10:03.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.696000 audit: BPF prog-id=9 op=LOAD
May 15 10:10:03.697546 systemd[1]: Starting systemd-networkd.service...
May 15 10:10:03.724473 ignition[661]: Ignition 2.14.0
May 15 10:10:03.724484 ignition[661]: Stage: fetch-offline
May 15 10:10:03.724519 ignition[661]: no configs at "/usr/lib/ignition/base.d"
May 15 10:10:03.724528 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:10:03.724653 ignition[661]: parsed url from cmdline: ""
May 15 10:10:03.724656 ignition[661]: no config URL provided
May 15 10:10:03.724661 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
May 15 10:10:03.724667 ignition[661]: no config at "/usr/lib/ignition/user.ign"
May 15 10:10:03.729977 systemd-networkd[747]: lo: Link UP
May 15 10:10:03.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.724685 ignition[661]: op(1): [started] loading QEMU firmware config module
May 15 10:10:03.729980 systemd-networkd[747]: lo: Gained carrier
May 15 10:10:03.724692 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 10:10:03.730333 systemd-networkd[747]: Enumeration completed
May 15 10:10:03.736294 ignition[661]: op(1): [finished] loading QEMU firmware config module
May 15 10:10:03.730426 systemd[1]: Started systemd-networkd.service.
May 15 10:10:03.730511 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 10:10:03.731610 systemd-networkd[747]: eth0: Link UP
May 15 10:10:03.731613 systemd-networkd[747]: eth0: Gained carrier
May 15 10:10:03.732001 systemd[1]: Reached target network.target.
May 15 10:10:03.733957 systemd[1]: Starting iscsiuio.service...
May 15 10:10:03.742780 systemd[1]: Started iscsiuio.service.
May 15 10:10:03.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.744585 systemd[1]: Starting iscsid.service...
May 15 10:10:03.747844 iscsid[754]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 15 10:10:03.747844 iscsid[754]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 15 10:10:03.747844 iscsid[754]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 15 10:10:03.747844 iscsid[754]: If using hardware iscsi like qla4xxx this message can be ignored.
May 15 10:10:03.747844 iscsid[754]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 15 10:10:03.747844 iscsid[754]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 15 10:10:03.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.751034 systemd[1]: Started iscsid.service.
May 15 10:10:03.756707 systemd[1]: Starting dracut-initqueue.service...
May 15 10:10:03.757299 systemd-networkd[747]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 10:10:03.766628 systemd[1]: Finished dracut-initqueue.service.
May 15 10:10:03.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.767627 systemd[1]: Reached target remote-fs-pre.target.
May 15 10:10:03.769141 systemd[1]: Reached target remote-cryptsetup.target.
May 15 10:10:03.770820 systemd[1]: Reached target remote-fs.target.
May 15 10:10:03.773036 systemd[1]: Starting dracut-pre-mount.service...
May 15 10:10:03.780391 systemd[1]: Finished dracut-pre-mount.service.
May 15 10:10:03.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.800687 ignition[661]: parsing config with SHA512: 5aca3109175c47cadb09f8683aa3e780c20715c281c0c6f2824f24aaa368c55713d678e511bc4c1101d45bbb62217ea51099c3c7eaa5d0cb97a8635c4ba7a004
May 15 10:10:03.811291 unknown[661]: fetched base config from "system"
May 15 10:10:03.811302 unknown[661]: fetched user config from "qemu"
May 15 10:10:03.811769 ignition[661]: fetch-offline: fetch-offline passed
May 15 10:10:03.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.813078 systemd[1]: Finished ignition-fetch-offline.service.
May 15 10:10:03.811821 ignition[661]: Ignition finished successfully
May 15 10:10:03.814661 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 10:10:03.815413 systemd[1]: Starting ignition-kargs.service...
May 15 10:10:03.824007 ignition[768]: Ignition 2.14.0
May 15 10:10:03.824018 ignition[768]: Stage: kargs
May 15 10:10:03.824110 ignition[768]: no configs at "/usr/lib/ignition/base.d"
May 15 10:10:03.826268 systemd[1]: Finished ignition-kargs.service.
May 15 10:10:03.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.824120 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:10:03.825075 ignition[768]: kargs: kargs passed
May 15 10:10:03.828577 systemd[1]: Starting ignition-disks.service...
May 15 10:10:03.825119 ignition[768]: Ignition finished successfully
May 15 10:10:03.835286 ignition[774]: Ignition 2.14.0
May 15 10:10:03.835296 ignition[774]: Stage: disks
May 15 10:10:03.835404 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 15 10:10:03.835414 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:10:03.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.837050 systemd[1]: Finished ignition-disks.service.
May 15 10:10:03.836268 ignition[774]: disks: disks passed
May 15 10:10:03.838256 systemd[1]: Reached target initrd-root-device.target.
May 15 10:10:03.836310 ignition[774]: Ignition finished successfully
May 15 10:10:03.839878 systemd[1]: Reached target local-fs-pre.target.
May 15 10:10:03.841306 systemd[1]: Reached target local-fs.target.
May 15 10:10:03.842486 systemd[1]: Reached target sysinit.target.
May 15 10:10:03.843844 systemd[1]: Reached target basic.target.
May 15 10:10:03.845972 systemd[1]: Starting systemd-fsck-root.service...
May 15 10:10:03.856236 systemd-fsck[782]: ROOT: clean, 623/553520 files, 56022/553472 blocks
May 15 10:10:03.859981 systemd[1]: Finished systemd-fsck-root.service.
May 15 10:10:03.861701 systemd[1]: Mounting sysroot.mount...
May 15 10:10:03.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.868256 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 15 10:10:03.868547 systemd[1]: Mounted sysroot.mount.
May 15 10:10:03.869310 systemd[1]: Reached target initrd-root-fs.target.
May 15 10:10:03.872043 systemd[1]: Mounting sysroot-usr.mount...
May 15 10:10:03.872968 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 15 10:10:03.873007 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 10:10:03.873029 systemd[1]: Reached target ignition-diskful.target.
May 15 10:10:03.874874 systemd[1]: Mounted sysroot-usr.mount.
May 15 10:10:03.876684 systemd[1]: Starting initrd-setup-root.service...
May 15 10:10:03.880893 initrd-setup-root[792]: cut: /sysroot/etc/passwd: No such file or directory
May 15 10:10:03.884426 initrd-setup-root[800]: cut: /sysroot/etc/group: No such file or directory
May 15 10:10:03.888593 initrd-setup-root[808]: cut: /sysroot/etc/shadow: No such file or directory
May 15 10:10:03.892801 initrd-setup-root[816]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 10:10:03.918931 systemd[1]: Finished initrd-setup-root.service.
May 15 10:10:03.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.920701 systemd[1]: Starting ignition-mount.service...
May 15 10:10:03.922065 systemd[1]: Starting sysroot-boot.service...
May 15 10:10:03.925782 bash[833]: umount: /sysroot/usr/share/oem: not mounted.
May 15 10:10:03.932942 ignition[834]: INFO : Ignition 2.14.0
May 15 10:10:03.932942 ignition[834]: INFO : Stage: mount
May 15 10:10:03.935286 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:10:03.935286 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:10:03.935286 ignition[834]: INFO : mount: mount passed
May 15 10:10:03.935286 ignition[834]: INFO : Ignition finished successfully
May 15 10:10:03.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:03.935340 systemd[1]: Finished ignition-mount.service.
May 15 10:10:03.941541 systemd[1]: Finished sysroot-boot.service.
May 15 10:10:03.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:10:04.576188 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 15 10:10:04.582227 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (843)
May 15 10:10:04.584503 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 10:10:04.584520 kernel: BTRFS info (device vda6): using free space tree
May 15 10:10:04.584529 kernel: BTRFS info (device vda6): has skinny extents
May 15 10:10:04.587319 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 15 10:10:04.588864 systemd[1]: Starting ignition-files.service...
May 15 10:10:04.602510 ignition[863]: INFO : Ignition 2.14.0 May 15 10:10:04.602510 ignition[863]: INFO : Stage: files May 15 10:10:04.604107 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:10:04.604107 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:10:04.604107 ignition[863]: DEBUG : files: compiled without relabeling support, skipping May 15 10:10:04.609593 ignition[863]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 10:10:04.609593 ignition[863]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 10:10:04.613516 ignition[863]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 10:10:04.614829 ignition[863]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 10:10:04.616319 unknown[863]: wrote ssh authorized keys file for user: core May 15 10:10:04.617413 ignition[863]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 10:10:04.617413 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 10:10:04.617413 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 10:10:04.617413 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 10:10:04.617413 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 10:10:04.705450 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 10:10:04.916360 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 10:10:04.916360 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:10:04.919963 ignition[863]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 10:10:04.919963 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 15 10:10:05.277661 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 10:10:05.286533 systemd-networkd[747]: eth0: Gained IPv6LL May 15 10:10:05.484510 ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 10:10:05.484510 ignition[863]: INFO : files: op(c): [started] processing unit "containerd.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 10:10:05.487903 ignition[863]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 10:10:05.487903 ignition[863]: INFO : files: op(c): [finished] processing unit "containerd.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" May 15 10:10:05.487903 ignition[863]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:10:05.533227 ignition[863]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:10:05.534800 ignition[863]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" May 15 10:10:05.534800 ignition[863]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" May 15 
10:10:05.534800 ignition[863]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" May 15 10:10:05.534800 ignition[863]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 10:10:05.534800 ignition[863]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 10:10:05.534800 ignition[863]: INFO : files: files passed May 15 10:10:05.534800 ignition[863]: INFO : Ignition finished successfully May 15 10:10:05.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.534740 systemd[1]: Finished ignition-files.service. May 15 10:10:05.536387 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 10:10:05.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.537830 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 10:10:05.552089 initrd-setup-root-after-ignition[888]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 15 10:10:05.538488 systemd[1]: Starting ignition-quench.service... May 15 10:10:05.554882 initrd-setup-root-after-ignition[890]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 10:10:05.543443 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 10:10:05.543524 systemd[1]: Finished ignition-quench.service. May 15 10:10:05.547557 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 10:10:05.548970 systemd[1]: Reached target ignition-complete.target. May 15 10:10:05.551070 systemd[1]: Starting initrd-parse-etc.service... May 15 10:10:05.562842 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 10:10:05.562928 systemd[1]: Finished initrd-parse-etc.service. May 15 10:10:05.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.564629 systemd[1]: Reached target initrd-fs.target. May 15 10:10:05.565967 systemd[1]: Reached target initrd.target. May 15 10:10:05.567288 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 10:10:05.568007 systemd[1]: Starting dracut-pre-pivot.service... May 15 10:10:05.577779 systemd[1]: Finished dracut-pre-pivot.service. 
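The files stage above logs every operation as op(N) with matching [started]/[finished] markers before reporting "files passed". A minimal sketch, not part of the boot flow itself, of how those markers could be paired from a captured console log to confirm no operation was left unfinished; the sample text is a shortened excerpt of the entries above.

```python
# Pair Ignition "op(N): [started]" / "[finished]" journal entries to confirm
# each files-stage operation completed. The sample is a shortened excerpt of
# the log above; in practice the text would come from a captured console log.
import re
from collections import defaultdict

sample = """\
ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
ignition[863]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
ignition[863]: INFO : files: op(c): [started] processing unit "containerd.service"
ignition[863]: INFO : files: op(c): [finished] processing unit "containerd.service"
"""

pattern = re.compile(r'op\(([0-9a-f]+)\): \[(started|finished)\]')

states = defaultdict(list)
for line in sample.splitlines():
    m = pattern.search(line)
    if m:
        op_id, state = m.groups()
        states[op_id].append(state)

for op_id, seen in states.items():
    status = "ok" if seen == ["started", "finished"] else "incomplete"
    print(f"op({op_id}): {status}")
```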
May 15 10:10:05.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.579344 systemd[1]: Starting initrd-cleanup.service... May 15 10:10:05.587160 systemd[1]: Stopped target nss-lookup.target. May 15 10:10:05.588107 systemd[1]: Stopped target remote-cryptsetup.target. May 15 10:10:05.589551 systemd[1]: Stopped target timers.target. May 15 10:10:05.590887 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 10:10:05.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.590992 systemd[1]: Stopped dracut-pre-pivot.service. May 15 10:10:05.592273 systemd[1]: Stopped target initrd.target. May 15 10:10:05.593649 systemd[1]: Stopped target basic.target. May 15 10:10:05.594915 systemd[1]: Stopped target ignition-complete.target. May 15 10:10:05.596270 systemd[1]: Stopped target ignition-diskful.target. May 15 10:10:05.597645 systemd[1]: Stopped target initrd-root-device.target. May 15 10:10:05.599097 systemd[1]: Stopped target remote-fs.target. May 15 10:10:05.600468 systemd[1]: Stopped target remote-fs-pre.target. May 15 10:10:05.601886 systemd[1]: Stopped target sysinit.target. May 15 10:10:05.603122 systemd[1]: Stopped target local-fs.target. May 15 10:10:05.604455 systemd[1]: Stopped target local-fs-pre.target. May 15 10:10:05.605764 systemd[1]: Stopped target swap.target. May 15 10:10:05.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.606982 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 10:10:05.607087 systemd[1]: Stopped dracut-pre-mount.service. May 15 10:10:05.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.608444 systemd[1]: Stopped target cryptsetup.target. May 15 10:10:05.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.609585 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 10:10:05.609687 systemd[1]: Stopped dracut-initqueue.service. May 15 10:10:05.611134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 10:10:05.611241 systemd[1]: Stopped ignition-fetch-offline.service. May 15 10:10:05.612715 systemd[1]: Stopped target paths.target. May 15 10:10:05.613794 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 10:10:05.619239 systemd[1]: Stopped systemd-ask-password-console.path. May 15 10:10:05.620157 systemd[1]: Stopped target slices.target. May 15 10:10:05.621547 systemd[1]: Stopped target sockets.target. May 15 10:10:05.622808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 10:10:05.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:05.622919 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 10:10:05.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.624277 systemd[1]: ignition-files.service: Deactivated successfully. May 15 10:10:05.624380 systemd[1]: Stopped ignition-files.service. May 15 10:10:05.628877 iscsid[754]: iscsid shutting down. May 15 10:10:05.626756 systemd[1]: Stopping ignition-mount.service... May 15 10:10:05.628407 systemd[1]: Stopping iscsid.service... May 15 10:10:05.630081 systemd[1]: Stopping sysroot-boot.service... May 15 10:10:05.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.631293 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 10:10:05.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.635129 ignition[903]: INFO : Ignition 2.14.0 May 15 10:10:05.635129 ignition[903]: INFO : Stage: umount May 15 10:10:05.635129 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:10:05.635129 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:10:05.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.631427 systemd[1]: Stopped systemd-udev-trigger.service. May 15 10:10:05.642549 ignition[903]: INFO : umount: umount passed May 15 10:10:05.642549 ignition[903]: INFO : Ignition finished successfully May 15 10:10:05.632940 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 10:10:05.633083 systemd[1]: Stopped dracut-pre-trigger.service. May 15 10:10:05.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.636621 systemd[1]: iscsid.service: Deactivated successfully. May 15 10:10:05.636730 systemd[1]: Stopped iscsid.service. May 15 10:10:05.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.638073 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 10:10:05.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.638155 systemd[1]: Stopped ignition-mount.service. 
May 15 10:10:05.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.641738 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 10:10:05.644674 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 10:10:05.644757 systemd[1]: Finished initrd-cleanup.service. May 15 10:10:05.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.646045 systemd[1]: iscsid.socket: Deactivated successfully. May 15 10:10:05.646083 systemd[1]: Closed iscsid.socket. May 15 10:10:05.647200 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 10:10:05.647263 systemd[1]: Stopped ignition-disks.service. May 15 10:10:05.648653 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 10:10:05.648694 systemd[1]: Stopped ignition-kargs.service. May 15 10:10:05.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.650075 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 10:10:05.650113 systemd[1]: Stopped ignition-setup.service. May 15 10:10:05.652259 systemd[1]: Stopping iscsiuio.service... May 15 10:10:05.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.653982 systemd[1]: iscsiuio.service: Deactivated successfully. May 15 10:10:05.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.654070 systemd[1]: Stopped iscsiuio.service. May 15 10:10:05.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.655435 systemd[1]: Stopped target network.target. May 15 10:10:05.656711 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 10:10:05.656748 systemd[1]: Closed iscsiuio.socket. May 15 10:10:05.658123 systemd[1]: Stopping systemd-networkd.service... May 15 10:10:05.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.659485 systemd[1]: Stopping systemd-resolved.service... May 15 10:10:05.661564 systemd-networkd[747]: eth0: DHCPv6 lease lost May 15 10:10:05.679000 audit: BPF prog-id=9 op=UNLOAD May 15 10:10:05.680000 audit: BPF prog-id=6 op=UNLOAD May 15 10:10:05.662774 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 10:10:05.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.662864 systemd[1]: Stopped systemd-networkd.service. May 15 10:10:05.664392 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
May 15 10:10:05.664424 systemd[1]: Closed systemd-networkd.socket. May 15 10:10:05.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.666258 systemd[1]: Stopping network-cleanup.service... May 15 10:10:05.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.667090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 10:10:05.667148 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 10:10:05.668685 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:10:05.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.668724 systemd[1]: Stopped systemd-sysctl.service. May 15 10:10:05.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.670852 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 10:10:05.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.670892 systemd[1]: Stopped systemd-modules-load.service. May 15 10:10:05.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.671837 systemd[1]: Stopping systemd-udevd.service... May 15 10:10:05.676198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 10:10:05.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.676676 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 10:10:05.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.676759 systemd[1]: Stopped systemd-resolved.service. May 15 10:10:05.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.680059 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 10:10:05.680149 systemd[1]: Stopped network-cleanup.service. May 15 10:10:05.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:05.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:05.683882 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 10:10:05.683954 systemd[1]: Stopped sysroot-boot.service. May 15 10:10:05.685008 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 10:10:05.685115 systemd[1]: Stopped systemd-udevd.service. May 15 10:10:05.686401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 10:10:05.686434 systemd[1]: Closed systemd-udevd-control.socket. May 15 10:10:05.687558 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 10:10:05.687590 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 10:10:05.689073 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 10:10:05.689115 systemd[1]: Stopped dracut-pre-udev.service. May 15 10:10:05.690612 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 10:10:05.714000 audit: BPF prog-id=8 op=UNLOAD May 15 10:10:05.714000 audit: BPF prog-id=7 op=UNLOAD May 15 10:10:05.690651 systemd[1]: Stopped dracut-cmdline.service. May 15 10:10:05.715000 audit: BPF prog-id=5 op=UNLOAD May 15 10:10:05.715000 audit: BPF prog-id=4 op=UNLOAD May 15 10:10:05.715000 audit: BPF prog-id=3 op=UNLOAD May 15 10:10:05.692070 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 10:10:05.692109 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 10:10:05.693456 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 10:10:05.693494 systemd[1]: Stopped initrd-setup-root.service. May 15 10:10:05.695645 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 15 10:10:05.696486 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 10:10:05.696540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 15 10:10:05.698998 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 10:10:05.699036 systemd[1]: Stopped kmod-static-nodes.service. May 15 10:10:05.699878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 10:10:05.699921 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 10:10:05.702195 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 10:10:05.702610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 10:10:05.702694 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 10:10:05.704019 systemd[1]: Reached target initrd-switch-root.target. May 15 10:10:05.705983 systemd[1]: Starting initrd-switch-root.service... May 15 10:10:05.732466 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 15 10:10:05.712756 systemd[1]: Switching root. May 15 10:10:05.733014 systemd-journald[290]: Journal stopped May 15 10:10:07.796683 kernel: SELinux: Class mctp_socket not defined in policy. May 15 10:10:07.796735 kernel: SELinux: Class anon_inode not defined in policy. 
May 15 10:10:07.796749 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 10:10:07.796759 kernel: SELinux: policy capability network_peer_controls=1 May 15 10:10:07.796773 kernel: SELinux: policy capability open_perms=1 May 15 10:10:07.796783 kernel: SELinux: policy capability extended_socket_class=1 May 15 10:10:07.796792 kernel: SELinux: policy capability always_check_network=0 May 15 10:10:07.796802 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 10:10:07.796811 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 10:10:07.796825 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 10:10:07.796834 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 10:10:07.796846 systemd[1]: Successfully loaded SELinux policy in 33.642ms. May 15 10:10:07.796863 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.447ms. May 15 10:10:07.796875 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:10:07.796888 systemd[1]: Detected virtualization kvm. May 15 10:10:07.796898 systemd[1]: Detected architecture arm64. May 15 10:10:07.796910 systemd[1]: Detected first boot. May 15 10:10:07.796921 systemd[1]: Initializing machine ID from VM UUID. May 15 10:10:07.796931 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 10:10:07.796941 kernel: kauditd_printk_skb: 72 callbacks suppressed May 15 10:10:07.796952 kernel: audit: type=1400 audit(1747303806.012:83): avc: denied { associate } for pid=953 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 10:10:07.796963 kernel: audit: type=1300 audit(1747303806.012:83): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001056ac a1=4000028b40 a2=4000026a00 a3=32 items=0 ppid=936 pid=953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:07.796974 kernel: audit: type=1327 audit(1747303806.012:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:10:07.796985 kernel: audit: type=1400 audit(1747303806.013:84): avc: denied { associate } for pid=953 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 10:10:07.796996 kernel: audit: type=1300 audit(1747303806.013:84): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000105789 a2=1ed a3=0 items=2 ppid=936 pid=953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:07.797006 kernel: audit: type=1307 
audit(1747303806.013:84): cwd="/" May 15 10:10:07.797020 kernel: audit: type=1302 audit(1747303806.013:84): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:10:07.797032 kernel: audit: type=1302 audit(1747303806.013:84): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:10:07.797045 kernel: audit: type=1327 audit(1747303806.013:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:10:07.797059 systemd[1]: Populated /etc with preset unit settings. May 15 10:10:07.797070 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:10:07.797082 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:10:07.797093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:10:07.797108 systemd[1]: Queued start job for default target multi-user.target. May 15 10:10:07.797119 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:10:07.797130 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 10:10:07.797140 systemd[1]: Created slice system-addon\x2drun.slice. May 15 10:10:07.797151 systemd[1]: Created slice system-getty.slice. May 15 10:10:07.797161 systemd[1]: Created slice system-modprobe.slice. May 15 10:10:07.797174 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 10:10:07.797185 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 10:10:07.797195 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 10:10:07.797205 systemd[1]: Created slice user.slice. May 15 10:10:07.797239 systemd[1]: Started systemd-ask-password-console.path. May 15 10:10:07.797250 systemd[1]: Started systemd-ask-password-wall.path. May 15 10:10:07.797260 systemd[1]: Set up automount boot.automount. May 15 10:10:07.797271 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 10:10:07.797282 systemd[1]: Reached target integritysetup.target. May 15 10:10:07.797293 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:10:07.797303 systemd[1]: Reached target remote-fs.target. May 15 10:10:07.797313 systemd[1]: Reached target slices.target. May 15 10:10:07.797324 systemd[1]: Reached target swap.target. May 15 10:10:07.797334 systemd[1]: Reached target torcx.target. May 15 10:10:07.797348 systemd[1]: Reached target veritysetup.target. May 15 10:10:07.797358 systemd[1]: Listening on systemd-coredump.socket. May 15 10:10:07.797370 systemd[1]: Listening on systemd-initctl.socket. 
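The long proctitle= values in the audit records above are hex-encoded command lines: audit hex-encodes the field because the argv elements are joined by NUL bytes. A small sketch of decoding such a field; the hex below is a shortened prefix of the torcx-generator value logged above.

```python
# Decode an audit PROCTITLE field like the ones logged above. The raw bytes
# are the process argv joined by NUL separators; the string here is a
# shortened prefix of the logged torcx-generator value.
proctitle_hex = (
    "2F7573722F6C69622F73797374656D642F"              # /usr/lib/systemd/
    "73797374656D2D67656E657261746F72732F"            # system-generators/
    "746F7263782D67656E657261746F72"                  # torcx-generator
    "002F72756E2F73797374656D642F67656E657261746F72"  # NUL + /run/systemd/generator
)

raw = bytes.fromhex(proctitle_hex)
argv = [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00")]
print(argv)
# ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator']
```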
May 15 10:10:07.797382 kernel: audit: type=1400 audit(1747303807.692:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:10:07.797392 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:10:07.797403 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:10:07.797413 systemd[1]: Listening on systemd-journald.socket. May 15 10:10:07.797424 systemd[1]: Listening on systemd-networkd.socket. May 15 10:10:07.797435 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:10:07.797446 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:10:07.797456 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:10:07.797467 systemd[1]: Mounting dev-hugepages.mount... May 15 10:10:07.797477 systemd[1]: Mounting dev-mqueue.mount... May 15 10:10:07.797487 systemd[1]: Mounting media.mount... May 15 10:10:07.797498 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:10:07.797508 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:10:07.797518 systemd[1]: Mounting tmp.mount... May 15 10:10:07.797528 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:10:07.797539 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:10:07.797549 systemd[1]: Starting kmod-static-nodes.service... May 15 10:10:07.797561 systemd[1]: Starting modprobe@configfs.service... May 15 10:10:07.797572 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:10:07.797582 systemd[1]: Starting modprobe@drm.service... May 15 10:10:07.797593 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:10:07.797603 systemd[1]: Starting modprobe@fuse.service... May 15 10:10:07.797614 systemd[1]: Starting modprobe@loop.service... May 15 10:10:07.797625 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:10:07.797635 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 15 10:10:07.797646 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 15 10:10:07.797672 systemd[1]: Starting systemd-journald.service... May 15 10:10:07.797683 kernel: fuse: init (API version 7.34) May 15 10:10:07.797693 kernel: loop: module loaded May 15 10:10:07.797703 systemd[1]: Starting systemd-modules-load.service... May 15 10:10:07.797714 systemd[1]: Starting systemd-network-generator.service... May 15 10:10:07.797724 systemd[1]: Starting systemd-remount-fs.service... May 15 10:10:07.797734 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:10:07.797758 systemd[1]: Mounted dev-hugepages.mount. May 15 10:10:07.797776 systemd[1]: Mounted dev-mqueue.mount. May 15 10:10:07.797788 systemd[1]: Mounted media.mount. May 15 10:10:07.797799 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:10:07.797809 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:10:07.797819 systemd[1]: Mounted tmp.mount. May 15 10:10:07.797830 systemd[1]: Finished kmod-static-nodes.service. May 15 10:10:07.797840 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:10:07.797851 systemd[1]: Finished modprobe@configfs.service. May 15 10:10:07.797861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:10:07.797871 systemd[1]: Finished modprobe@dm_mod.service. 
May 15 10:10:07.797883 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:10:07.797893 systemd[1]: Finished modprobe@drm.service. May 15 10:10:07.797906 systemd-journald[1034]: Journal started May 15 10:10:07.797948 systemd-journald[1034]: Runtime Journal (/run/log/journal/d8fa276d7d0141b9a11d94b1600e267a) is 6.0M, max 48.7M, 42.6M free. May 15 10:10:07.692000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:10:07.692000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 15 10:10:07.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.794000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:10:07.794000 audit[1034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffecfdaf20 a2=4000 a3=1 items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:07.794000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:10:07.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.799911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:10:07.799971 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:10:07.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:07.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.802434 systemd[1]: Started systemd-journald.service. May 15 10:10:07.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.803433 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 10:10:07.803669 systemd[1]: Finished modprobe@fuse.service. May 15 10:10:07.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.805010 systemd[1]: Finished flatcar-tmpfiles.service. May 15 10:10:07.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.806098 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:10:07.806353 systemd[1]: Finished modprobe@loop.service. May 15 10:10:07.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.807736 systemd[1]: Finished systemd-modules-load.service. May 15 10:10:07.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.809204 systemd[1]: Finished systemd-network-generator.service. May 15 10:10:07.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.810738 systemd[1]: Finished systemd-remount-fs.service. May 15 10:10:07.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.812085 systemd[1]: Reached target network-pre.target. May 15 10:10:07.814060 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 10:10:07.816044 systemd[1]: Mounting sys-kernel-config.mount... May 15 10:10:07.816852 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 10:10:07.819772 systemd[1]: Starting systemd-hwdb-update.service... 
May 15 10:10:07.821909 systemd[1]: Starting systemd-journal-flush.service... May 15 10:10:07.823154 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:10:07.824308 systemd[1]: Starting systemd-random-seed.service... May 15 10:10:07.825407 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:10:07.826615 systemd-journald[1034]: Time spent on flushing to /var/log/journal/d8fa276d7d0141b9a11d94b1600e267a is 20.389ms for 938 entries. May 15 10:10:07.826615 systemd-journald[1034]: System Journal (/var/log/journal/d8fa276d7d0141b9a11d94b1600e267a) is 8.0M, max 195.6M, 187.6M free. May 15 10:10:07.865715 systemd-journald[1034]: Received client request to flush runtime journal. May 15 10:10:07.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.826465 systemd[1]: Starting systemd-sysctl.service... May 15 10:10:07.829453 systemd[1]: Starting systemd-sysusers.service... May 15 10:10:07.832735 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:10:07.834136 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 10:10:07.867009 udevadm[1086]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 10:10:07.835262 systemd[1]: Mounted sys-kernel-config.mount. May 15 10:10:07.837356 systemd[1]: Starting systemd-udev-settle.service... May 15 10:10:07.838492 systemd[1]: Finished systemd-random-seed.service. May 15 10:10:07.839620 systemd[1]: Reached target first-boot-complete.target. May 15 10:10:07.843627 systemd[1]: Finished systemd-sysctl.service. May 15 10:10:07.850911 systemd[1]: Finished systemd-sysusers.service. May 15 10:10:07.852934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:10:07.866690 systemd[1]: Finished systemd-journal-flush.service. May 15 10:10:07.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:07.877400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:10:07.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.193603 systemd[1]: Finished systemd-hwdb-update.service. 
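For scale, the journal-flush message above gives enough to estimate the per-entry cost of moving the runtime journal to /var/log/journal (figures copied from the log; arithmetic only).

```python
# Back-of-the-envelope from the systemd-journald flush message above:
# "Time spent on flushing ... is 20.389ms for 938 entries".
flush_ms = 20.389   # total flush time reported by systemd-journald
entries = 938       # runtime entries flushed to the persistent journal

per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.1f} µs per entry")  # ~21.7 µs per entry
```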
May 15 10:10:08.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.195708 systemd[1]: Starting systemd-udevd.service... May 15 10:10:08.219816 systemd-udevd[1095]: Using default interface naming scheme 'v252'. May 15 10:10:08.231517 systemd[1]: Started systemd-udevd.service. May 15 10:10:08.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.233801 systemd[1]: Starting systemd-networkd.service... May 15 10:10:08.240751 systemd[1]: Starting systemd-userdbd.service... May 15 10:10:08.261647 systemd[1]: Found device dev-ttyAMA0.device. May 15 10:10:08.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.296428 systemd[1]: Started systemd-userdbd.service. May 15 10:10:08.320591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:10:08.346175 systemd-networkd[1097]: lo: Link UP May 15 10:10:08.346185 systemd-networkd[1097]: lo: Gained carrier May 15 10:10:08.346549 systemd-networkd[1097]: Enumeration completed May 15 10:10:08.346644 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:10:08.346658 systemd[1]: Started systemd-networkd.service. May 15 10:10:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.349565 systemd-networkd[1097]: eth0: Link UP May 15 10:10:08.349575 systemd-networkd[1097]: eth0: Gained carrier May 15 10:10:08.353676 systemd[1]: Finished systemd-udev-settle.service. May 15 10:10:08.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.355660 systemd[1]: Starting lvm2-activation-early.service... May 15 10:10:08.363490 lvm[1129]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:10:08.369330 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:10:08.390016 systemd[1]: Finished lvm2-activation-early.service. May 15 10:10:08.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.390999 systemd[1]: Reached target cryptsetup.target. May 15 10:10:08.392856 systemd[1]: Starting lvm2-activation.service... May 15 10:10:08.396284 lvm[1131]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:10:08.428054 systemd[1]: Finished lvm2-activation.service. May 15 10:10:08.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:08.428994 systemd[1]: Reached target local-fs-pre.target. May 15 10:10:08.429845 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 10:10:08.429874 systemd[1]: Reached target local-fs.target. May 15 10:10:08.430638 systemd[1]: Reached target machines.target. May 15 10:10:08.432523 systemd[1]: Starting ldconfig.service... May 15 10:10:08.433504 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:10:08.433555 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:10:08.434550 systemd[1]: Starting systemd-boot-update.service... May 15 10:10:08.436373 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 10:10:08.438482 systemd[1]: Starting systemd-machine-id-commit.service... May 15 10:10:08.440431 systemd[1]: Starting systemd-sysext.service... May 15 10:10:08.441484 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1135 (bootctl) May 15 10:10:08.442494 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 10:10:08.448890 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 10:10:08.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.452200 systemd[1]: Unmounting usr-share-oem.mount... May 15 10:10:08.455556 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 10:10:08.455768 systemd[1]: Unmounted usr-share-oem.mount. May 15 10:10:08.471241 kernel: loop0: detected capacity change from 0 to 194096 May 15 10:10:08.514172 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 10:10:08.516099 systemd[1]: Finished systemd-machine-id-commit.service. May 15 10:10:08.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.527250 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 10:10:08.527709 systemd-fsck[1144]: fsck.fat 4.2 (2021-01-31) May 15 10:10:08.527709 systemd-fsck[1144]: /dev/vda1: 236 files, 117182/258078 clusters May 15 10:10:08.529618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 10:10:08.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.531989 systemd[1]: Mounting boot.mount... May 15 10:10:08.541086 systemd[1]: Mounted boot.mount. May 15 10:10:08.544248 kernel: loop1: detected capacity change from 0 to 194096 May 15 10:10:08.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.549133 systemd[1]: Finished systemd-boot-update.service. 
May 15 10:10:08.549238 (sd-sysext)[1155]: Using extensions 'kubernetes'. May 15 10:10:08.549556 (sd-sysext)[1155]: Merged extensions into '/usr'. May 15 10:10:08.568599 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:10:08.569761 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:10:08.571676 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:10:08.573537 systemd[1]: Starting modprobe@loop.service... May 15 10:10:08.574466 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:10:08.574614 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:10:08.575702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:10:08.575876 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:10:08.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.577366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:10:08.577510 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:10:08.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.579182 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:10:08.579414 systemd[1]: Finished modprobe@loop.service. May 15 10:10:08.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.580691 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:10:08.580784 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:10:08.616715 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 10:10:08.620792 systemd[1]: Finished ldconfig.service. May 15 10:10:08.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.776332 systemd[1]: Mounting usr-share-oem.mount... May 15 10:10:08.781393 systemd[1]: Mounted usr-share-oem.mount. 
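The merge reported by (sd-sysext) above follows from the layout Ignition set up earlier: a kubernetes.raw symlink under /etc/extensions pointing at the image under /opt/extensions. A small illustrative sketch, assuming a host laid out like this one, that reports where each such link points and whether the target exists.

```python
# Report the sysext wiring implied by the log: symlinks under /etc/extensions
# (here: kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw)
# and whether each target is actually present. Illustrative only.
from pathlib import Path

ext_dir = Path("/etc/extensions")

for link in sorted(ext_dir.glob("*.raw")):
    target = link.resolve()
    status = "ok" if target.exists() else "dangling"
    print(f"{link.name} -> {target} [{status}]")
```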
May 15 10:10:08.783276 systemd[1]: Finished systemd-sysext.service. May 15 10:10:08.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.785296 systemd[1]: Starting ensure-sysext.service... May 15 10:10:08.786946 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 10:10:08.791483 systemd[1]: Reloading. May 15 10:10:08.795921 systemd-tmpfiles[1171]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 10:10:08.797019 systemd-tmpfiles[1171]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 10:10:08.798322 systemd-tmpfiles[1171]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 10:10:08.831958 /usr/lib/systemd/system-generators/torcx-generator[1191]: time="2025-05-15T10:10:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:10:08.831985 /usr/lib/systemd/system-generators/torcx-generator[1191]: time="2025-05-15T10:10:08Z" level=info msg="torcx already run" May 15 10:10:08.895315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:10:08.895333 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:10:08.912334 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:10:08.953146 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 10:10:08.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.957069 systemd[1]: Starting audit-rules.service... May 15 10:10:08.958912 systemd[1]: Starting clean-ca-certificates.service... May 15 10:10:08.961015 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 10:10:08.963549 systemd[1]: Starting systemd-resolved.service... May 15 10:10:08.965729 systemd[1]: Starting systemd-timesyncd.service... May 15 10:10:08.968327 systemd[1]: Starting systemd-update-utmp.service... May 15 10:10:08.969769 systemd[1]: Finished clean-ca-certificates.service. May 15 10:10:08.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.975135 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:10:08.977000 audit[1248]: SYSTEM_BOOT pid=1248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' May 15 10:10:08.979029 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:10:08.980415 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:10:08.982393 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:10:08.984314 systemd[1]: Starting modprobe@loop.service... May 15 10:10:08.988366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:10:08.988523 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:10:08.988647 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:10:08.989714 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 10:10:08.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.991482 systemd[1]: Finished systemd-update-utmp.service. May 15 10:10:08.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.992759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:10:08.992902 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:10:08.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.994287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:10:08.994450 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:10:08.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.995750 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:10:08.995913 systemd[1]: Finished modprobe@loop.service. May 15 10:10:08.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:08.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:08.998088 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:10:08.998194 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:10:08.999780 systemd[1]: Starting systemd-update-done.service... May 15 10:10:09.002700 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:10:09.003984 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:10:09.005926 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:10:09.008003 systemd[1]: Starting modprobe@loop.service... May 15 10:10:09.009012 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:10:09.009145 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:10:09.009355 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:10:09.010245 systemd[1]: Finished systemd-update-done.service. May 15 10:10:09.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.011560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:10:09.011708 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:10:09.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.012930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:10:09.013076 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:10:09.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.014484 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:10:09.014644 systemd[1]: Finished modprobe@loop.service. May 15 10:10:09.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.018069 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 15 10:10:09.019477 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:10:09.022990 systemd[1]: Starting modprobe@drm.service... May 15 10:10:09.025051 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:10:09.027210 systemd[1]: Starting modprobe@loop.service... May 15 10:10:09.028033 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:10:09.028175 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:10:09.029588 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 10:10:09.030639 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:10:09.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:09.034000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 10:10:09.034000 audit[1276]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf0d62e0 a2=420 a3=0 items=0 ppid=1236 pid=1276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:09.034000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 10:10:09.035583 augenrules[1276]: No rules May 15 10:10:09.031760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:10:09.031929 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:10:09.033307 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:10:09.033462 systemd[1]: Finished modprobe@drm.service. May 15 10:10:09.037955 systemd[1]: Finished audit-rules.service. May 15 10:10:09.039280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:10:09.039441 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:10:09.041109 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:10:09.041314 systemd[1]: Finished modprobe@loop.service. May 15 10:10:09.042611 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:10:09.042702 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:10:09.043875 systemd[1]: Finished ensure-sysext.service. May 15 10:10:09.055388 systemd[1]: Started systemd-timesyncd.service. May 15 10:10:09.056296 systemd[1]: Reached target time-set.target. May 15 10:10:09.056998 systemd-timesyncd[1242]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 10:10:09.057058 systemd-timesyncd[1242]: Initial clock synchronization to Thu 2025-05-15 10:10:09.042096 UTC. May 15 10:10:09.058703 systemd-resolved[1241]: Positive Trust Anchors: May 15 10:10:09.058941 systemd-resolved[1241]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:10:09.059016 systemd-resolved[1241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:10:09.075746 systemd-resolved[1241]: Defaulting to hostname 'linux'. May 15 10:10:09.079272 systemd[1]: Started systemd-resolved.service. May 15 10:10:09.080156 systemd[1]: Reached target network.target. May 15 10:10:09.080965 systemd[1]: Reached target nss-lookup.target. May 15 10:10:09.081775 systemd[1]: Reached target sysinit.target. May 15 10:10:09.082628 systemd[1]: Started motdgen.path. May 15 10:10:09.083354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 15 10:10:09.084557 systemd[1]: Started logrotate.timer. May 15 10:10:09.085370 systemd[1]: Started mdadm.timer. May 15 10:10:09.086013 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 10:10:09.086899 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 10:10:09.086928 systemd[1]: Reached target paths.target. May 15 10:10:09.087684 systemd[1]: Reached target timers.target. May 15 10:10:09.088767 systemd[1]: Listening on dbus.socket. May 15 10:10:09.090710 systemd[1]: Starting docker.socket... May 15 10:10:09.092443 systemd[1]: Listening on sshd.socket. May 15 10:10:09.093289 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:10:09.093639 systemd[1]: Listening on docker.socket. May 15 10:10:09.094474 systemd[1]: Reached target sockets.target. May 15 10:10:09.095310 systemd[1]: Reached target basic.target. May 15 10:10:09.096231 systemd[1]: System is tainted: cgroupsv1 May 15 10:10:09.096283 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:10:09.096305 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:10:09.097383 systemd[1]: Starting containerd.service... May 15 10:10:09.099178 systemd[1]: Starting dbus.service... May 15 10:10:09.100971 systemd[1]: Starting enable-oem-cloudinit.service... May 15 10:10:09.103178 systemd[1]: Starting extend-filesystems.service... May 15 10:10:09.104072 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 10:10:09.105370 systemd[1]: Starting motdgen.service... May 15 10:10:09.108633 systemd[1]: Starting prepare-helm.service... May 15 10:10:09.110806 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 10:10:09.112851 systemd[1]: Starting sshd-keygen.service... May 15 10:10:09.114917 jq[1298]: false May 15 10:10:09.116644 systemd[1]: Starting systemd-logind.service... May 15 10:10:09.117627 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 15 10:10:09.117703 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 10:10:09.118782 extend-filesystems[1299]: Found loop1 May 15 10:10:09.123796 extend-filesystems[1299]: Found vda May 15 10:10:09.123796 extend-filesystems[1299]: Found vda1 May 15 10:10:09.123796 extend-filesystems[1299]: Found vda2 May 15 10:10:09.123796 extend-filesystems[1299]: Found vda3 May 15 10:10:09.123796 extend-filesystems[1299]: Found usr May 15 10:10:09.123796 extend-filesystems[1299]: Found vda4 May 15 10:10:09.123796 extend-filesystems[1299]: Found vda6 May 15 10:10:09.123796 extend-filesystems[1299]: Found vda7 May 15 10:10:09.123796 extend-filesystems[1299]: Found vda9 May 15 10:10:09.123796 extend-filesystems[1299]: Checking size of /dev/vda9 May 15 10:10:09.179457 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 10:10:09.118872 systemd[1]: Starting update-engine.service... May 15 10:10:09.153309 dbus-daemon[1297]: [system] SELinux support is enabled May 15 10:10:09.179828 extend-filesystems[1299]: Resized partition /dev/vda9 May 15 10:10:09.124095 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 10:10:09.181497 extend-filesystems[1338]: resize2fs 1.46.5 (30-Dec-2021) May 15 10:10:09.127329 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 10:10:09.127583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 10:10:09.182912 jq[1318]: true May 15 10:10:09.128629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 10:10:09.128873 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 10:10:09.183431 tar[1323]: linux-arm64/helm May 15 10:10:09.149884 systemd[1]: motdgen.service: Deactivated successfully. May 15 10:10:09.183780 jq[1325]: true May 15 10:10:09.150165 systemd[1]: Finished motdgen.service. May 15 10:10:09.153504 systemd[1]: Started dbus.service. May 15 10:10:09.162617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 10:10:09.162639 systemd[1]: Reached target system-config.target. May 15 10:10:09.172365 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 10:10:09.172389 systemd[1]: Reached target user-config.target. May 15 10:10:09.204283 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 10:10:09.225965 systemd-logind[1310]: Watching system buttons on /dev/input/event0 (Power Button) May 15 10:10:09.229194 extend-filesystems[1338]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 10:10:09.229194 extend-filesystems[1338]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 10:10:09.229194 extend-filesystems[1338]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 10:10:09.243511 bash[1352]: Updated "/home/core/.ssh/authorized_keys" May 15 10:10:09.228100 systemd-logind[1310]: New seat seat0. May 15 10:10:09.243627 extend-filesystems[1299]: Resized filesystem in /dev/vda9 May 15 10:10:09.228627 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 10:10:09.234944 systemd[1]: Finished extend-filesystems.service. May 15 10:10:09.238006 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 10:10:09.239234 systemd[1]: Started systemd-logind.service. 
May 15 10:10:09.246753 env[1327]: time="2025-05-15T10:10:09.246703560Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 10:10:09.249865 update_engine[1312]: I0515 10:10:09.249485 1312 main.cc:92] Flatcar Update Engine starting May 15 10:10:09.251880 systemd[1]: Started update-engine.service. May 15 10:10:09.252104 update_engine[1312]: I0515 10:10:09.251922 1312 update_check_scheduler.cc:74] Next update check in 3m13s May 15 10:10:09.254482 systemd[1]: Started locksmithd.service. May 15 10:10:09.264069 env[1327]: time="2025-05-15T10:10:09.264024880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 10:10:09.264317 env[1327]: time="2025-05-15T10:10:09.264290360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 10:10:09.267386 env[1327]: time="2025-05-15T10:10:09.267350200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 10:10:09.267474 env[1327]: time="2025-05-15T10:10:09.267459840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 10:10:09.267764 env[1327]: time="2025-05-15T10:10:09.267742000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:10:09.267900 env[1327]: time="2025-05-15T10:10:09.267869480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 10:10:09.268046 env[1327]: time="2025-05-15T10:10:09.267960200Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 10:10:09.268106 env[1327]: time="2025-05-15T10:10:09.268093840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 10:10:09.268405 env[1327]: time="2025-05-15T10:10:09.268374080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 10:10:09.269022 env[1327]: time="2025-05-15T10:10:09.269001000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 10:10:09.269440 env[1327]: time="2025-05-15T10:10:09.269417040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:10:09.269637 env[1327]: time="2025-05-15T10:10:09.269613880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 15 10:10:09.269838 env[1327]: time="2025-05-15T10:10:09.269818040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 10:10:09.269945 env[1327]: time="2025-05-15T10:10:09.269910640Z" level=info msg="metadata content store policy set" policy=shared May 15 10:10:09.273291 env[1327]: time="2025-05-15T10:10:09.273264720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 10:10:09.273430 env[1327]: time="2025-05-15T10:10:09.273413560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 10:10:09.273503 env[1327]: time="2025-05-15T10:10:09.273490280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 10:10:09.273644 env[1327]: time="2025-05-15T10:10:09.273628080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 10:10:09.273720 env[1327]: time="2025-05-15T10:10:09.273696160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 10:10:09.273835 env[1327]: time="2025-05-15T10:10:09.273819520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 10:10:09.273918 env[1327]: time="2025-05-15T10:10:09.273904120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 10:10:09.274493 env[1327]: time="2025-05-15T10:10:09.274457920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 10:10:09.274583 env[1327]: time="2025-05-15T10:10:09.274568360Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 10:10:09.274641 env[1327]: time="2025-05-15T10:10:09.274629240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 10:10:09.274704 env[1327]: time="2025-05-15T10:10:09.274691000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 10:10:09.274758 env[1327]: time="2025-05-15T10:10:09.274746400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 10:10:09.275049 env[1327]: time="2025-05-15T10:10:09.275029800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 10:10:09.275328 env[1327]: time="2025-05-15T10:10:09.275309280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 10:10:09.275707 env[1327]: time="2025-05-15T10:10:09.275684920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 10:10:09.275791 env[1327]: time="2025-05-15T10:10:09.275776960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 10:10:09.275854 env[1327]: time="2025-05-15T10:10:09.275841120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 10:10:09.276002 env[1327]: time="2025-05-15T10:10:09.275988000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 15 10:10:09.276060 env[1327]: time="2025-05-15T10:10:09.276046880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276132 env[1327]: time="2025-05-15T10:10:09.276119360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276187 env[1327]: time="2025-05-15T10:10:09.276175400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276270 env[1327]: time="2025-05-15T10:10:09.276256560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276391 env[1327]: time="2025-05-15T10:10:09.276376080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276454 env[1327]: time="2025-05-15T10:10:09.276441680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276508 env[1327]: time="2025-05-15T10:10:09.276495440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276574 env[1327]: time="2025-05-15T10:10:09.276561400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 10:10:09.276741 env[1327]: time="2025-05-15T10:10:09.276724160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276813 env[1327]: time="2025-05-15T10:10:09.276798920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276878 env[1327]: time="2025-05-15T10:10:09.276865360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 10:10:09.276934 env[1327]: time="2025-05-15T10:10:09.276921760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 10:10:09.276994 env[1327]: time="2025-05-15T10:10:09.276979040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 10:10:09.277046 env[1327]: time="2025-05-15T10:10:09.277033560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 10:10:09.277105 env[1327]: time="2025-05-15T10:10:09.277092840Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 10:10:09.277191 env[1327]: time="2025-05-15T10:10:09.277175920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 10:10:09.277536 env[1327]: time="2025-05-15T10:10:09.277445000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 10:10:09.279869 env[1327]: time="2025-05-15T10:10:09.277919160Z" level=info msg="Connect containerd service" May 15 10:10:09.279964 env[1327]: time="2025-05-15T10:10:09.279942920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 10:10:09.280981 env[1327]: time="2025-05-15T10:10:09.280949400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:10:09.283422 env[1327]: time="2025-05-15T10:10:09.283383720Z" level=info msg="Start subscribing containerd event" May 15 10:10:09.283534 env[1327]: time="2025-05-15T10:10:09.283518760Z" level=info msg="Start recovering state" May 15 10:10:09.283669 env[1327]: time="2025-05-15T10:10:09.283654440Z" level=info msg="Start event monitor" May 15 10:10:09.283740 env[1327]: time="2025-05-15T10:10:09.283723680Z" level=info msg="Start snapshots syncer" May 15 10:10:09.283794 env[1327]: time="2025-05-15T10:10:09.283782120Z" level=info msg="Start cni network conf syncer for default" May 15 10:10:09.283864 env[1327]: time="2025-05-15T10:10:09.283721360Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 15 10:10:09.283913 env[1327]: time="2025-05-15T10:10:09.283832280Z" level=info msg="Start streaming server" May 15 10:10:09.284043 env[1327]: time="2025-05-15T10:10:09.284024640Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 10:10:09.284274 env[1327]: time="2025-05-15T10:10:09.284258520Z" level=info msg="containerd successfully booted in 0.038173s" May 15 10:10:09.284371 systemd[1]: Started containerd.service. May 15 10:10:09.311318 locksmithd[1361]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 10:10:09.550720 tar[1323]: linux-arm64/LICENSE May 15 10:10:09.550819 tar[1323]: linux-arm64/README.md May 15 10:10:09.555411 systemd[1]: Finished prepare-helm.service. May 15 10:10:09.573390 systemd-networkd[1097]: eth0: Gained IPv6LL May 15 10:10:09.575054 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 10:10:09.576351 systemd[1]: Reached target network-online.target. May 15 10:10:09.578776 systemd[1]: Starting kubelet.service... May 15 10:10:10.082650 systemd[1]: Started kubelet.service. May 15 10:10:10.413626 sshd_keygen[1322]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 10:10:10.431561 systemd[1]: Finished sshd-keygen.service. May 15 10:10:10.434115 systemd[1]: Starting issuegen.service... May 15 10:10:10.439035 systemd[1]: issuegen.service: Deactivated successfully. May 15 10:10:10.439276 systemd[1]: Finished issuegen.service. May 15 10:10:10.441577 systemd[1]: Starting systemd-user-sessions.service... May 15 10:10:10.449103 systemd[1]: Finished systemd-user-sessions.service. May 15 10:10:10.451572 systemd[1]: Started getty@tty1.service. May 15 10:10:10.453622 systemd[1]: Started serial-getty@ttyAMA0.service. May 15 10:10:10.454961 systemd[1]: Reached target getty.target. May 15 10:10:10.455861 systemd[1]: Reached target multi-user.target. May 15 10:10:10.458125 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 10:10:10.464720 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 10:10:10.464953 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 10:10:10.466158 systemd[1]: Startup finished in 4.811s (kernel) + 4.688s (userspace) = 9.499s. May 15 10:10:10.562029 kubelet[1382]: E0515 10:10:10.561972 1382 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:10:10.563732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:10:10.563884 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:10:13.920285 systemd[1]: Created slice system-sshd.slice. May 15 10:10:13.921467 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:33552.service. May 15 10:10:13.965261 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 33552 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:13.967432 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:13.976186 systemd-logind[1310]: New session 1 of user core. May 15 10:10:13.976953 systemd[1]: Created slice user-500.slice. May 15 10:10:13.977888 systemd[1]: Starting user-runtime-dir@500.service... 
May 15 10:10:13.986015 systemd[1]: Finished user-runtime-dir@500.service. May 15 10:10:13.987199 systemd[1]: Starting user@500.service... May 15 10:10:13.990056 (systemd)[1414]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.046686 systemd[1414]: Queued start job for default target default.target. May 15 10:10:14.046885 systemd[1414]: Reached target paths.target. May 15 10:10:14.046901 systemd[1414]: Reached target sockets.target. May 15 10:10:14.046912 systemd[1414]: Reached target timers.target. May 15 10:10:14.046921 systemd[1414]: Reached target basic.target. May 15 10:10:14.046975 systemd[1414]: Reached target default.target. May 15 10:10:14.046996 systemd[1414]: Startup finished in 51ms. May 15 10:10:14.047242 systemd[1]: Started user@500.service. May 15 10:10:14.048595 systemd[1]: Started session-1.scope. May 15 10:10:14.097861 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:33554.service. May 15 10:10:14.135637 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 33554 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:14.136838 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.140267 systemd-logind[1310]: New session 2 of user core. May 15 10:10:14.141045 systemd[1]: Started session-2.scope. May 15 10:10:14.193542 sshd[1423]: pam_unix(sshd:session): session closed for user core May 15 10:10:14.195841 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:33564.service. May 15 10:10:14.196899 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:33554.service: Deactivated successfully. May 15 10:10:14.197853 systemd[1]: session-2.scope: Deactivated successfully. May 15 10:10:14.197872 systemd-logind[1310]: Session 2 logged out. Waiting for processes to exit. May 15 10:10:14.198781 systemd-logind[1310]: Removed session 2. May 15 10:10:14.234095 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 33564 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:14.235287 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.238882 systemd-logind[1310]: New session 3 of user core. May 15 10:10:14.239461 systemd[1]: Started session-3.scope. May 15 10:10:14.288286 sshd[1428]: pam_unix(sshd:session): session closed for user core May 15 10:10:14.290638 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:33568.service. May 15 10:10:14.291258 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:33564.service: Deactivated successfully. May 15 10:10:14.292204 systemd-logind[1310]: Session 3 logged out. Waiting for processes to exit. May 15 10:10:14.292273 systemd[1]: session-3.scope: Deactivated successfully. May 15 10:10:14.292990 systemd-logind[1310]: Removed session 3. May 15 10:10:14.330058 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 33568 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:14.331189 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.334388 systemd-logind[1310]: New session 4 of user core. May 15 10:10:14.335206 systemd[1]: Started session-4.scope. May 15 10:10:14.388351 sshd[1435]: pam_unix(sshd:session): session closed for user core May 15 10:10:14.390412 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:33570.service. May 15 10:10:14.390857 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:33568.service: Deactivated successfully. May 15 10:10:14.391825 systemd[1]: session-4.scope: Deactivated successfully. 
May 15 10:10:14.391855 systemd-logind[1310]: Session 4 logged out. Waiting for processes to exit. May 15 10:10:14.392946 systemd-logind[1310]: Removed session 4. May 15 10:10:14.427690 sshd[1442]: Accepted publickey for core from 10.0.0.1 port 33570 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:14.428787 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.432005 systemd-logind[1310]: New session 5 of user core. May 15 10:10:14.432353 systemd[1]: Started session-5.scope. May 15 10:10:14.489270 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 10:10:14.489477 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:10:14.503468 dbus-daemon[1297]: avc: received setenforce notice (enforcing=1) May 15 10:10:14.505236 sudo[1448]: pam_unix(sudo:session): session closed for user root May 15 10:10:14.507036 sshd[1442]: pam_unix(sshd:session): session closed for user core May 15 10:10:14.509164 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:33574.service. May 15 10:10:14.510234 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:33570.service: Deactivated successfully. May 15 10:10:14.511092 systemd-logind[1310]: Session 5 logged out. Waiting for processes to exit. May 15 10:10:14.511290 systemd[1]: session-5.scope: Deactivated successfully. May 15 10:10:14.512008 systemd-logind[1310]: Removed session 5. May 15 10:10:14.546808 sshd[1450]: Accepted publickey for core from 10.0.0.1 port 33574 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:14.547925 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.550914 systemd-logind[1310]: New session 6 of user core. May 15 10:10:14.551772 systemd[1]: Started session-6.scope. May 15 10:10:14.603404 sudo[1457]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 10:10:14.603876 sudo[1457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:10:14.606416 sudo[1457]: pam_unix(sudo:session): session closed for user root May 15 10:10:14.610481 sudo[1456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 15 10:10:14.610692 sudo[1456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:10:14.618544 systemd[1]: Stopping audit-rules.service... May 15 10:10:14.619000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 15 10:10:14.620396 auditctl[1460]: No rules May 15 10:10:14.620621 kernel: kauditd_printk_skb: 70 callbacks suppressed May 15 10:10:14.620652 kernel: audit: type=1305 audit(1747303814.619:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 15 10:10:14.619000 audit[1460]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd923e070 a2=420 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:14.620873 systemd[1]: audit-rules.service: Deactivated successfully. May 15 10:10:14.621072 systemd[1]: Stopped audit-rules.service. May 15 10:10:14.622444 systemd[1]: Starting audit-rules.service... 
May 15 10:10:14.625992 kernel: audit: type=1300 audit(1747303814.619:152): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd923e070 a2=420 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:14.626042 kernel: audit: type=1327 audit(1747303814.619:152): proctitle=2F7362696E2F617564697463746C002D44 May 15 10:10:14.619000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 15 10:10:14.626969 kernel: audit: type=1131 audit(1747303814.620:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.639412 augenrules[1478]: No rules May 15 10:10:14.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.640057 systemd[1]: Finished audit-rules.service. May 15 10:10:14.641771 sudo[1456]: pam_unix(sudo:session): session closed for user root May 15 10:10:14.641000 audit[1456]: USER_END pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.643093 sshd[1450]: pam_unix(sshd:session): session closed for user core May 15 10:10:14.645196 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:33590.service. May 15 10:10:14.646073 kernel: audit: type=1130 audit(1747303814.640:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.646118 kernel: audit: type=1106 audit(1747303814.641:155): pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.646134 kernel: audit: type=1104 audit(1747303814.641:156): pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.641000 audit[1456]: CRED_DISP pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.646539 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:33574.service: Deactivated successfully. May 15 10:10:14.648279 systemd-logind[1310]: Session 6 logged out. Waiting for processes to exit. May 15 10:10:14.648425 systemd[1]: session-6.scope: Deactivated successfully. 
May 15 10:10:14.645000 audit[1450]: USER_END pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.649236 systemd-logind[1310]: Removed session 6. May 15 10:10:14.652253 kernel: audit: type=1106 audit(1747303814.645:157): pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.652309 kernel: audit: type=1130 audit(1747303814.645:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.43:22-10.0.0.1:33590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.43:22-10.0.0.1:33590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.645000 audit[1450]: CRED_DISP pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.657761 kernel: audit: type=1104 audit(1747303814.645:159): pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.43:22-10.0.0.1:33574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:14.685000 audit[1483]: USER_ACCT pid=1483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.685745 sshd[1483]: Accepted publickey for core from 10.0.0.1 port 33590 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:10:14.686000 audit[1483]: CRED_ACQ pid=1483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.686000 audit[1483]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5785100 a2=3 a3=1 items=0 ppid=1 pid=1483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:14.686000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:10:14.687118 sshd[1483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:10:14.690854 systemd[1]: Started session-7.scope. May 15 10:10:14.690913 systemd-logind[1310]: New session 7 of user core. 
May 15 10:10:14.694000 audit[1483]: USER_START pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.695000 audit[1488]: CRED_ACQ pid=1488 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:14.742000 audit[1489]: USER_ACCT pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.742578 sudo[1489]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 10:10:14.742000 audit[1489]: CRED_REFR pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.743419 sudo[1489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:10:14.745000 audit[1489]: USER_START pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:14.802156 systemd[1]: Starting docker.service... May 15 10:10:14.880972 env[1501]: time="2025-05-15T10:10:14.880919098Z" level=info msg="Starting up" May 15 10:10:14.882586 env[1501]: time="2025-05-15T10:10:14.882559643Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:10:14.882586 env[1501]: time="2025-05-15T10:10:14.882581907Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:10:14.882671 env[1501]: time="2025-05-15T10:10:14.882602652Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:10:14.882671 env[1501]: time="2025-05-15T10:10:14.882613565Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:10:14.886364 env[1501]: time="2025-05-15T10:10:14.886273424Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:10:14.886364 env[1501]: time="2025-05-15T10:10:14.886295448Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:10:14.886364 env[1501]: time="2025-05-15T10:10:14.886311157Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:10:14.886364 env[1501]: time="2025-05-15T10:10:14.886329703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:10:15.058496 env[1501]: time="2025-05-15T10:10:15.058397495Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 15 10:10:15.058496 env[1501]: time="2025-05-15T10:10:15.058424676Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 15 10:10:15.058681 env[1501]: time="2025-05-15T10:10:15.058554469Z" level=info msg="Loading containers: start." 
May 15 10:10:15.104000 audit[1535]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.104000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe5263370 a2=0 a3=1 items=0 ppid=1501 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.104000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 15 10:10:15.106000 audit[1537]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.106000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcaad27e0 a2=0 a3=1 items=0 ppid=1501 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.106000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 15 10:10:15.107000 audit[1539]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.107000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd83323a0 a2=0 a3=1 items=0 ppid=1501 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.107000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 15 10:10:15.109000 audit[1541]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.109000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffef5c5250 a2=0 a3=1 items=0 ppid=1501 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.109000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 15 10:10:15.113000 audit[1543]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.113000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffda1b200 a2=0 a3=1 items=0 ppid=1501 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.113000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 15 10:10:15.143000 audit[1548]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.143000 audit[1548]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff25553e0 a2=0 a3=1 items=0 ppid=1501 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 15 10:10:15.148000 audit[1550]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.148000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc9ed22a0 a2=0 a3=1 items=0 ppid=1501 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.148000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 15 10:10:15.150000 audit[1552]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.150000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe878d200 a2=0 a3=1 items=0 ppid=1501 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.150000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 15 10:10:15.152000 audit[1554]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.152000 audit[1554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc8fb0f40 a2=0 a3=1 items=0 ppid=1501 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.152000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 15 10:10:15.159000 audit[1558]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.159000 audit[1558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc3cb79e0 a2=0 a3=1 items=0 ppid=1501 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.159000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 15 10:10:15.170000 audit[1559]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.170000 audit[1559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe1c4cd00 a2=0 a3=1 items=0 ppid=1501 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.170000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 15 10:10:15.180266 kernel: Initializing XFRM netlink socket May 15 10:10:15.202609 env[1501]: time="2025-05-15T10:10:15.202572947Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 15 10:10:15.215000 audit[1567]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.215000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffffcb16190 a2=0 a3=1 items=0 ppid=1501 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.215000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 15 10:10:15.232000 audit[1570]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.232000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe2ec7300 a2=0 a3=1 items=0 ppid=1501 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.232000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 15 10:10:15.234000 audit[1573]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.234000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdd627d90 a2=0 a3=1 items=0 ppid=1501 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.234000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 15 10:10:15.236000 audit[1575]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.236000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd4243a80 a2=0 a3=1 items=0 ppid=1501 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.236000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 15 10:10:15.238000 audit[1577]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.238000 audit[1577]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffc5570340 a2=0 a3=1 items=0 ppid=1501 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.238000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 15 10:10:15.241000 audit[1579]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.241000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe077c690 a2=0 a3=1 items=0 ppid=1501 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.241000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 15 10:10:15.243000 audit[1581]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.243000 audit[1581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffea26c850 a2=0 a3=1 items=0 ppid=1501 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.243000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 15 10:10:15.249000 audit[1584]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.249000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff7374320 a2=0 a3=1 items=0 ppid=1501 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.249000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 15 10:10:15.250000 audit[1586]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.250000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffcc0f7320 a2=0 a3=1 items=0 ppid=1501 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.250000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 15 10:10:15.252000 audit[1588]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1588 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.252000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffa4474f0 a2=0 a3=1 items=0 ppid=1501 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.252000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 15 10:10:15.254000 audit[1590]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.254000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc23e0200 a2=0 a3=1 items=0 ppid=1501 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.254000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 15 10:10:15.255831 systemd-networkd[1097]: docker0: Link UP May 15 10:10:15.261000 audit[1594]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.261000 audit[1594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff02bd120 a2=0 a3=1 items=0 ppid=1501 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.261000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 15 10:10:15.271000 audit[1595]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:15.271000 audit[1595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffa3aa460 a2=0 a3=1 items=0 ppid=1501 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:15.271000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 15 10:10:15.273425 env[1501]: time="2025-05-15T10:10:15.273387488Z" level=info msg="Loading containers: done." May 15 10:10:15.289991 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1478764440-merged.mount: Deactivated successfully. 
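[Annotation] The audit records above capture dockerd installing its iptables chains (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2, plus the MASQUERADE rule for the 172.17.0.0/16 bridge network) through /usr/sbin/xtables-nft-multi. Each PROCTITLE field is the executed command line, hex-encoded with NUL-separated arguments; that encoding is standard Linux audit formatting, nothing specific to this host. A minimal sketch for decoding those fields back into readable commands:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # First PROCTITLE value above, copied verbatim from the log:
    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D49"
        "00464F5257415244002D6A00444F434B45522D55534552"
    ))
    # -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER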
May 15 10:10:15.292851 env[1501]: time="2025-05-15T10:10:15.292808051Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 10:10:15.293018 env[1501]: time="2025-05-15T10:10:15.292999442Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 10:10:15.293138 env[1501]: time="2025-05-15T10:10:15.293119402Z" level=info msg="Daemon has completed initialization" May 15 10:10:15.305921 systemd[1]: Started docker.service. May 15 10:10:15.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:15.313370 env[1501]: time="2025-05-15T10:10:15.313274791Z" level=info msg="API listen on /run/docker.sock" May 15 10:10:16.040053 env[1327]: time="2025-05-15T10:10:16.039998633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 10:10:16.597743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866210458.mount: Deactivated successfully. May 15 10:10:17.950812 env[1327]: time="2025-05-15T10:10:17.950759598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:17.952101 env[1327]: time="2025-05-15T10:10:17.952066187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:17.954169 env[1327]: time="2025-05-15T10:10:17.954142443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:17.955690 env[1327]: time="2025-05-15T10:10:17.955661186Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:17.957139 env[1327]: time="2025-05-15T10:10:17.957097939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 15 10:10:17.966117 env[1327]: time="2025-05-15T10:10:17.965976341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 10:10:19.552568 env[1327]: time="2025-05-15T10:10:19.552521386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:19.553929 env[1327]: time="2025-05-15T10:10:19.553898992Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:19.555801 env[1327]: time="2025-05-15T10:10:19.555775579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:19.557395 env[1327]: 
time="2025-05-15T10:10:19.557369912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:19.558949 env[1327]: time="2025-05-15T10:10:19.558922187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 15 10:10:19.567477 env[1327]: time="2025-05-15T10:10:19.567450845Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 10:10:20.715927 env[1327]: time="2025-05-15T10:10:20.715869034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:20.721455 env[1327]: time="2025-05-15T10:10:20.721424493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:20.723045 env[1327]: time="2025-05-15T10:10:20.723009763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:20.725554 env[1327]: time="2025-05-15T10:10:20.725519823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:20.726246 env[1327]: time="2025-05-15T10:10:20.726199292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 15 10:10:20.735225 env[1327]: time="2025-05-15T10:10:20.735186724Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 10:10:20.814663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 10:10:20.815575 kernel: kauditd_printk_skb: 84 callbacks suppressed May 15 10:10:20.815613 kernel: audit: type=1130 audit(1747303820.813:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:20.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:20.814835 systemd[1]: Stopped kubelet.service. May 15 10:10:20.816753 systemd[1]: Starting kubelet.service... May 15 10:10:20.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:20.820687 kernel: audit: type=1131 audit(1747303820.813:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:20.900419 systemd[1]: Started kubelet.service. 
May 15 10:10:20.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:20.904279 kernel: audit: type=1130 audit(1747303820.899:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:20.959416 kubelet[1666]: E0515 10:10:20.959367 1666 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:10:20.962136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:10:20.962293 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:10:20.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 10:10:20.965247 kernel: audit: type=1131 audit(1747303820.961:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 10:10:21.826838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601548148.mount: Deactivated successfully. May 15 10:10:22.373632 env[1327]: time="2025-05-15T10:10:22.373580570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:22.374907 env[1327]: time="2025-05-15T10:10:22.374870219Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:22.376609 env[1327]: time="2025-05-15T10:10:22.376587005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:22.377535 env[1327]: time="2025-05-15T10:10:22.377498096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:22.378485 env[1327]: time="2025-05-15T10:10:22.378451289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 15 10:10:22.387350 env[1327]: time="2025-05-15T10:10:22.387316702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 10:10:22.935394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832427576.mount: Deactivated successfully. 
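[Annotation] The kubelet lines embedded in the journal use the klog header format: severity letter (I/W/E/F), month and day, wall-clock time, thread id, source file:line, then the message, as in "E0515 10:10:20.959367 1666 run.go:74] ...". A minimal sketch for splitting such lines into fields when post-processing this log:

    import re

    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<tid>\d+) "
        r"(?P<src>[\w./-]+):(?P<line>\d+)\] (?P<msg>.*)"
    )

    # Abbreviated sample taken from the failed kubelet start above:
    sample = ('E0515 10:10:20.959367 1666 run.go:74] "command failed" '
              'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ..."')
    m = KLOG.search(sample)
    print(m.group("sev"), m.group("src") + ":" + m.group("line"), m.group("msg")[:40])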
May 15 10:10:23.765991 env[1327]: time="2025-05-15T10:10:23.765933156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:23.767559 env[1327]: time="2025-05-15T10:10:23.767523520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:23.769240 env[1327]: time="2025-05-15T10:10:23.769199848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:23.772799 env[1327]: time="2025-05-15T10:10:23.772768139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:23.773495 env[1327]: time="2025-05-15T10:10:23.773453705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 10:10:23.782459 env[1327]: time="2025-05-15T10:10:23.782429150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 10:10:24.239238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345905237.mount: Deactivated successfully. May 15 10:10:24.242846 env[1327]: time="2025-05-15T10:10:24.242796730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:24.244881 env[1327]: time="2025-05-15T10:10:24.244851278Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:24.246268 env[1327]: time="2025-05-15T10:10:24.246234199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:24.247929 env[1327]: time="2025-05-15T10:10:24.247886939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:24.248473 env[1327]: time="2025-05-15T10:10:24.248417100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 15 10:10:24.258113 env[1327]: time="2025-05-15T10:10:24.258083191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 10:10:24.814555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151411527.mount: Deactivated successfully. 
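[Annotation] Each image pull above ends with a 'PullImage ... returns image reference "sha256:..."' line pairing the requested tag with the resolved image ID (for example, pause:3.9 resolves to sha256:829e9de338bd...). A minimal sketch that collects those pairs from a saved journal excerpt; the escaped quotes in the pattern match how journald records the containerd msg field here:

    import re

    PULL_DONE = re.compile(
        r'PullImage \\"(?P<ref>[^"\\]+)\\" returns image reference \\"(?P<id>sha256:[0-9a-f]+)\\"'
    )

    def pulled_images(journal_text: str) -> dict:
        """Map each pulled tag to the image ID reported by containerd."""
        return {m.group("ref"): m.group("id") for m in PULL_DONE.finditer(journal_text)}

    # Usage: pulled_images(open("boot.log").read())
    # -> {'registry.k8s.io/pause:3.9': 'sha256:829e9de338bd...', ...}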
May 15 10:10:26.903578 env[1327]: time="2025-05-15T10:10:26.903503981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:26.905810 env[1327]: time="2025-05-15T10:10:26.905781669Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:26.907704 env[1327]: time="2025-05-15T10:10:26.907655171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:26.910199 env[1327]: time="2025-05-15T10:10:26.910142670Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:26.910980 env[1327]: time="2025-05-15T10:10:26.910942966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 15 10:10:31.213116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 10:10:31.213312 systemd[1]: Stopped kubelet.service. May 15 10:10:31.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:31.214760 systemd[1]: Starting kubelet.service... May 15 10:10:31.216225 kernel: audit: type=1130 audit(1747303831.212:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:31.216284 kernel: audit: type=1131 audit(1747303831.212:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:31.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:31.293931 systemd[1]: Started kubelet.service. May 15 10:10:31.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:31.297271 kernel: audit: type=1130 audit(1747303831.292:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:31.331192 kubelet[1776]: E0515 10:10:31.331147 1776 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:10:31.333947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:10:31.334085 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:10:31.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 10:10:31.337298 kernel: audit: type=1131 audit(1747303831.333:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 10:10:33.171746 systemd[1]: Stopped kubelet.service. May 15 10:10:33.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.173705 systemd[1]: Starting kubelet.service... May 15 10:10:33.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.177143 kernel: audit: type=1130 audit(1747303833.170:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.177188 kernel: audit: type=1131 audit(1747303833.170:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.190978 systemd[1]: Reloading. May 15 10:10:33.239575 /usr/lib/systemd/system-generators/torcx-generator[1812]: time="2025-05-15T10:10:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:10:33.239605 /usr/lib/systemd/system-generators/torcx-generator[1812]: time="2025-05-15T10:10:33Z" level=info msg="torcx already run" May 15 10:10:33.332494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:10:33.332512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:10:33.349024 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:10:33.404621 systemd[1]: Started kubelet.service. 
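[Annotation] At this point kubelet.service has failed twice for the same missing config file, and systemd reports the running total ("restart counter is at 2"). A minimal sketch for reading that counter directly from systemd on the node, using the standard NRestarts property of systemctl show:

    import subprocess

    def kubelet_restarts() -> int:
        """Return the NRestarts counter systemd keeps for kubelet.service."""
        out = subprocess.run(
            ["systemctl", "show", "kubelet.service", "-p", "NRestarts", "--value"],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    print(kubelet_restarts())  # matches the "restart counter is at 2" message above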
May 15 10:10:33.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.407421 systemd[1]: Stopping kubelet.service... May 15 10:10:33.408057 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:10:33.408241 kernel: audit: type=1130 audit(1747303833.404:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.408484 systemd[1]: Stopped kubelet.service. May 15 10:10:33.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.410730 systemd[1]: Starting kubelet.service... May 15 10:10:33.411267 kernel: audit: type=1131 audit(1747303833.407:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.487427 systemd[1]: Started kubelet.service. May 15 10:10:33.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.491235 kernel: audit: type=1130 audit(1747303833.486:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:33.528119 kubelet[1871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:10:33.528119 kubelet[1871]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 10:10:33.528119 kubelet[1871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
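[Annotation] This kubelet start (pid 1871) gets past the earlier failures because /var/lib/kubelet/config.yaml now exists, and its deprecation warnings point at moving the remaining flags into that --config file. A minimal, hypothetical sketch of such a file, assuming the standard kubelet.config.k8s.io/v1beta1 schema; the containerd socket path is an assumption and is not printed in this log, and on a kubeadm-managed node like this one the real, much fuller file is written by kubeadm init/join:

    import textwrap

    # Hypothetical minimal KubeletConfiguration; shown only to illustrate the
    # file's shape, not to replace the kubeadm-generated one.
    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        staticPodPath: /etc/kubernetes/manifests
        """)

    print(KUBELET_CONFIG)  # the real file on this host lives at /var/lib/kubelet/config.yaml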
May 15 10:10:33.528482 kubelet[1871]: I0515 10:10:33.528256 1871 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:10:34.293463 kubelet[1871]: I0515 10:10:34.293421 1871 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 10:10:34.293463 kubelet[1871]: I0515 10:10:34.293450 1871 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:10:34.293672 kubelet[1871]: I0515 10:10:34.293648 1871 server.go:927] "Client rotation is on, will bootstrap in background" May 15 10:10:34.322507 kubelet[1871]: E0515 10:10:34.322445 1871 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.322626 kubelet[1871]: I0515 10:10:34.322605 1871 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:10:34.329603 kubelet[1871]: I0515 10:10:34.329575 1871 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 10:10:34.330859 kubelet[1871]: I0515 10:10:34.330820 1871 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:10:34.331012 kubelet[1871]: I0515 10:10:34.330858 1871 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 10:10:34.331108 kubelet[1871]: I0515 10:10:34.331075 1871 topology_manager.go:138] "Creating topology manager with none policy" May 15 10:10:34.331108 kubelet[1871]: I0515 10:10:34.331084 1871 container_manager_linux.go:301] "Creating device plugin manager" May 15 10:10:34.331284 kubelet[1871]: I0515 10:10:34.331270 1871 state_mem.go:36] "Initialized new in-memory state store" May 15 
10:10:34.332128 kubelet[1871]: I0515 10:10:34.332108 1871 kubelet.go:400] "Attempting to sync node with API server" May 15 10:10:34.332128 kubelet[1871]: I0515 10:10:34.332127 1871 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:10:34.332234 kubelet[1871]: I0515 10:10:34.332210 1871 kubelet.go:312] "Adding apiserver pod source" May 15 10:10:34.332420 kubelet[1871]: I0515 10:10:34.332407 1871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:10:34.332945 kubelet[1871]: W0515 10:10:34.332904 1871 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.333051 kubelet[1871]: E0515 10:10:34.333039 1871 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.333151 kubelet[1871]: W0515 10:10:34.332994 1871 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.333315 kubelet[1871]: E0515 10:10:34.333301 1871 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.333446 kubelet[1871]: I0515 10:10:34.333415 1871 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:10:34.333780 kubelet[1871]: I0515 10:10:34.333769 1871 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:10:34.334210 kubelet[1871]: W0515 10:10:34.334190 1871 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
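[Annotation] Every reflector error above is the same symptom: the kubelet tries to watch the API server at https://10.0.0.43:6443 before anything is listening there, because it is the kubelet itself that will launch the kube-apiserver static pod from /etc/kubernetes/manifests. A minimal sketch of the underlying TCP check, with the address taken from the log:

    import socket

    def api_server_up(host: str = "10.0.0.43", port: int = 6443, timeout: float = 2.0) -> bool:
        """True once something accepts TCP connections on the API server endpoint."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(api_server_up())  # False here: 'dial tcp 10.0.0.43:6443: connect: connection refused'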
May 15 10:10:34.334997 kubelet[1871]: I0515 10:10:34.334971 1871 server.go:1264] "Started kubelet" May 15 10:10:34.335000 audit[1871]: AVC avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:34.337294 kubelet[1871]: I0515 10:10:34.337271 1871 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 15 10:10:34.337386 kubelet[1871]: I0515 10:10:34.337372 1871 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 15 10:10:34.337589 kubelet[1871]: I0515 10:10:34.337575 1871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:10:34.338524 kubelet[1871]: I0515 10:10:34.338506 1871 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 10:10:34.338830 kubelet[1871]: I0515 10:10:34.338815 1871 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:10:34.335000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:34.335000 audit[1871]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c80ff0 a1=4000d56198 a2=4000c80fc0 a3=25 items=0 ppid=1 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.335000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:34.336000 audit[1871]: AVC avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:34.336000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:34.336000 audit[1871]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40004f3400 a1=4000d561b0 a2=4000c81080 a3=25 items=0 ppid=1 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:34.340233 kernel: audit: type=1400 audit(1747303834.335:207): avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:34.346645 kubelet[1871]: I0515 10:10:34.346625 1871 reconciler.go:26] "Reconciler: start to sync state" May 15 10:10:34.347462 kubelet[1871]: I0515 10:10:34.347326 1871 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:10:34.348180 kubelet[1871]: 
I0515 10:10:34.348049 1871 factory.go:221] Registration of the systemd container factory successfully May 15 10:10:34.348180 kubelet[1871]: I0515 10:10:34.348134 1871 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:10:34.349159 kubelet[1871]: I0515 10:10:34.349132 1871 server.go:455] "Adding debug handlers to kubelet server" May 15 10:10:34.350045 kubelet[1871]: I0515 10:10:34.349993 1871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:10:34.350201 kubelet[1871]: I0515 10:10:34.350185 1871 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:10:34.350345 kubelet[1871]: E0515 10:10:34.350318 1871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" May 15 10:10:34.350466 kubelet[1871]: W0515 10:10:34.350423 1871 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.350518 kubelet[1871]: E0515 10:10:34.350471 1871 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.350000 audit[1883]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.350000 audit[1883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffabab440 a2=0 a3=1 items=0 ppid=1871 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 15 10:10:34.352050 kubelet[1871]: E0515 10:10:34.352024 1871 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:10:34.351000 audit[1884]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.351000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5815bd0 a2=0 a3=1 items=0 ppid=1871 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 15 10:10:34.353160 kubelet[1871]: E0515 10:10:34.349220 1871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fab9468366a0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:10:34.334947853 +0000 UTC m=+0.844227618,LastTimestamp:2025-05-15 10:10:34.334947853 +0000 UTC m=+0.844227618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 10:10:34.354176 kubelet[1871]: I0515 10:10:34.354154 1871 factory.go:221] Registration of the containerd container factory successfully May 15 10:10:34.353000 audit[1886]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.353000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe5836b40 a2=0 a3=1 items=0 ppid=1871 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 10:10:34.355000 audit[1888]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.355000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffae16340 a2=0 a3=1 items=0 ppid=1871 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 10:10:34.361000 audit[1891]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.361000 audit[1891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc3b6cb00 a2=0 a3=1 items=0 ppid=1871 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 15 10:10:34.363457 kubelet[1871]: I0515 10:10:34.363432 1871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:10:34.362000 audit[1893]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:34.362000 audit[1893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffea177b50 a2=0 a3=1 items=0 ppid=1871 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 15 10:10:34.364375 kubelet[1871]: I0515 10:10:34.364322 1871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 10:10:34.364474 kubelet[1871]: I0515 10:10:34.364463 1871 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 10:10:34.364498 kubelet[1871]: I0515 10:10:34.364483 1871 kubelet.go:2337] "Starting kubelet main sync loop" May 15 10:10:34.364553 kubelet[1871]: E0515 10:10:34.364526 1871 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:10:34.364000 audit[1894]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.364000 audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe77fc370 a2=0 a3=1 items=0 ppid=1871 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 15 10:10:34.365000 audit[1895]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.365000 audit[1895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedccda20 a2=0 a3=1 items=0 ppid=1871 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 15 10:10:34.366000 audit[1896]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:34.366000 audit[1896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6053a90 a2=0 a3=1 items=0 ppid=1871 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 15 10:10:34.366000 audit[1897]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:34.366000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd61112d0 a2=0 a3=1 items=0 ppid=1871 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 15 10:10:34.367000 audit[1898]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:34.367000 audit[1898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffcf138650 a2=0 a3=1 items=0 ppid=1871 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 15 10:10:34.369000 audit[1900]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:34.369000 audit[1900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd2989ad0 a2=0 a3=1 items=0 ppid=1871 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 15 10:10:34.371192 kubelet[1871]: W0515 10:10:34.371154 1871 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.371281 kubelet[1871]: E0515 10:10:34.371201 1871 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:34.372059 kubelet[1871]: I0515 10:10:34.372042 1871 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 10:10:34.372110 kubelet[1871]: I0515 10:10:34.372075 1871 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 10:10:34.372110 kubelet[1871]: I0515 10:10:34.372093 1871 state_mem.go:36] "Initialized new in-memory state store" May 15 10:10:34.437693 kubelet[1871]: I0515 10:10:34.437649 1871 policy_none.go:49] "None policy: Start" May 15 10:10:34.438421 kubelet[1871]: I0515 10:10:34.438404 1871 memory_manager.go:170] "Starting memorymanager" 
policy="None" May 15 10:10:34.438492 kubelet[1871]: I0515 10:10:34.438430 1871 state_mem.go:35] "Initializing new in-memory state store" May 15 10:10:34.440319 kubelet[1871]: I0515 10:10:34.440279 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:10:34.441700 kubelet[1871]: E0515 10:10:34.441665 1871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 15 10:10:34.442321 kubelet[1871]: I0515 10:10:34.442294 1871 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:10:34.441000 audit[1871]: AVC avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:34.441000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:34.441000 audit[1871]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000dcc5a0 a1=4000ba1290 a2=4000dcc570 a3=25 items=0 ppid=1 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:34.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:34.442503 kubelet[1871]: I0515 10:10:34.442355 1871 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 15 10:10:34.442503 kubelet[1871]: I0515 10:10:34.442472 1871 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:10:34.442592 kubelet[1871]: I0515 10:10:34.442578 1871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:10:34.443851 kubelet[1871]: E0515 10:10:34.443809 1871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 10:10:34.465335 kubelet[1871]: I0515 10:10:34.465296 1871 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:10:34.466189 kubelet[1871]: I0515 10:10:34.466160 1871 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:10:34.469605 kubelet[1871]: I0515 10:10:34.469578 1871 topology_manager.go:215] "Topology Admit Handler" podUID="89d15ac7b96673a0872356cf715a3cbb" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:10:34.548351 kubelet[1871]: I0515 10:10:34.547672 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:10:34.548351 kubelet[1871]: I0515 10:10:34.547707 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89d15ac7b96673a0872356cf715a3cbb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89d15ac7b96673a0872356cf715a3cbb\") " pod="kube-system/kube-apiserver-localhost" May 15 10:10:34.548351 kubelet[1871]: I0515 10:10:34.547729 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89d15ac7b96673a0872356cf715a3cbb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89d15ac7b96673a0872356cf715a3cbb\") " pod="kube-system/kube-apiserver-localhost" May 15 10:10:34.548351 kubelet[1871]: I0515 10:10:34.547760 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:34.548351 kubelet[1871]: I0515 10:10:34.547778 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:34.549764 kubelet[1871]: I0515 10:10:34.547794 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:34.549764 kubelet[1871]: I0515 10:10:34.547809 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89d15ac7b96673a0872356cf715a3cbb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89d15ac7b96673a0872356cf715a3cbb\") " pod="kube-system/kube-apiserver-localhost" May 15 10:10:34.549764 kubelet[1871]: I0515 10:10:34.547843 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:34.549764 kubelet[1871]: I0515 10:10:34.547858 1871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:34.551021 kubelet[1871]: E0515 10:10:34.550988 1871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" May 15 10:10:34.643289 kubelet[1871]: I0515 10:10:34.643263 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:10:34.643575 kubelet[1871]: E0515 10:10:34.643551 1871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 15 10:10:34.771375 kubelet[1871]: E0515 10:10:34.771351 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:34.772006 env[1327]: time="2025-05-15T10:10:34.771962379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 10:10:34.775647 kubelet[1871]: E0515 10:10:34.775621 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:34.775991 env[1327]: time="2025-05-15T10:10:34.775964311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 10:10:34.776371 kubelet[1871]: E0515 10:10:34.776348 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:34.776680 env[1327]: time="2025-05-15T10:10:34.776653295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89d15ac7b96673a0872356cf715a3cbb,Namespace:kube-system,Attempt:0,}" May 15 10:10:34.951527 
kubelet[1871]: E0515 10:10:34.951386 1871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" May 15 10:10:35.044520 kubelet[1871]: I0515 10:10:35.044495 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:10:35.044817 kubelet[1871]: E0515 10:10:35.044778 1871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 15 10:10:35.212894 kubelet[1871]: W0515 10:10:35.212770 1871 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:35.212894 kubelet[1871]: E0515 10:10:35.212834 1871 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 10:10:35.282744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490220887.mount: Deactivated successfully. May 15 10:10:35.288127 env[1327]: time="2025-05-15T10:10:35.288085982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.289061 env[1327]: time="2025-05-15T10:10:35.289032207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.290105 env[1327]: time="2025-05-15T10:10:35.290076814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.291841 env[1327]: time="2025-05-15T10:10:35.291806295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.293447 env[1327]: time="2025-05-15T10:10:35.293424516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.295636 env[1327]: time="2025-05-15T10:10:35.295605914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.298612 env[1327]: time="2025-05-15T10:10:35.298586244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.301664 env[1327]: time="2025-05-15T10:10:35.301637920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.302621 env[1327]: time="2025-05-15T10:10:35.302599263Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.303327 env[1327]: time="2025-05-15T10:10:35.303301413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.303956 env[1327]: time="2025-05-15T10:10:35.303934297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.304576 env[1327]: time="2025-05-15T10:10:35.304557422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:35.334800 env[1327]: time="2025-05-15T10:10:35.334543207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:10:35.334800 env[1327]: time="2025-05-15T10:10:35.334594998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:10:35.334800 env[1327]: time="2025-05-15T10:10:35.334605516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:10:35.335012 env[1327]: time="2025-05-15T10:10:35.334783883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ecd9304dc26d2ab17afc3a098d31c6b34d2b04dbcca979f43514ff3426495122 pid=1926 runtime=io.containerd.runc.v2 May 15 10:10:35.335279 env[1327]: time="2025-05-15T10:10:35.335194847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:10:35.335279 env[1327]: time="2025-05-15T10:10:35.335237839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:10:35.335279 env[1327]: time="2025-05-15T10:10:35.335248797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:10:35.335422 env[1327]: time="2025-05-15T10:10:35.335382532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e3ea2e273f71cf8123de2b7282deb194847ddf4efc346fb7120d55a7fe3db8d pid=1931 runtime=io.containerd.runc.v2 May 15 10:10:35.337497 env[1327]: time="2025-05-15T10:10:35.337426395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:10:35.337497 env[1327]: time="2025-05-15T10:10:35.337462748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:10:35.337497 env[1327]: time="2025-05-15T10:10:35.337472947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:10:35.337621 env[1327]: time="2025-05-15T10:10:35.337588685Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3eec7c8829197081c153b3de5187cb7a90776c63d33e58f45fed321d0f189390 pid=1929 runtime=io.containerd.runc.v2 May 15 10:10:35.410390 env[1327]: time="2025-05-15T10:10:35.410337419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89d15ac7b96673a0872356cf715a3cbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eec7c8829197081c153b3de5187cb7a90776c63d33e58f45fed321d0f189390\"" May 15 10:10:35.411323 kubelet[1871]: E0515 10:10:35.411298 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:35.413868 env[1327]: time="2025-05-15T10:10:35.413821855Z" level=info msg="CreateContainer within sandbox \"3eec7c8829197081c153b3de5187cb7a90776c63d33e58f45fed321d0f189390\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 10:10:35.419330 env[1327]: time="2025-05-15T10:10:35.419286607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecd9304dc26d2ab17afc3a098d31c6b34d2b04dbcca979f43514ff3426495122\"" May 15 10:10:35.419714 env[1327]: time="2025-05-15T10:10:35.419687133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e3ea2e273f71cf8123de2b7282deb194847ddf4efc346fb7120d55a7fe3db8d\"" May 15 10:10:35.419832 kubelet[1871]: E0515 10:10:35.419761 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:35.420377 kubelet[1871]: E0515 10:10:35.420355 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:35.422541 env[1327]: time="2025-05-15T10:10:35.422508292Z" level=info msg="CreateContainer within sandbox \"4e3ea2e273f71cf8123de2b7282deb194847ddf4efc346fb7120d55a7fe3db8d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 10:10:35.422655 env[1327]: time="2025-05-15T10:10:35.422629070Z" level=info msg="CreateContainer within sandbox \"ecd9304dc26d2ab17afc3a098d31c6b34d2b04dbcca979f43514ff3426495122\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 10:10:35.438668 env[1327]: time="2025-05-15T10:10:35.438622878Z" level=info msg="CreateContainer within sandbox \"3eec7c8829197081c153b3de5187cb7a90776c63d33e58f45fed321d0f189390\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3aa3264261db5a5d08fdb810b86ede517d7fd0b44dd22dc8d3f7f0c7213d9930\"" May 15 10:10:35.439287 env[1327]: time="2025-05-15T10:10:35.439258761Z" level=info msg="StartContainer for \"3aa3264261db5a5d08fdb810b86ede517d7fd0b44dd22dc8d3f7f0c7213d9930\"" May 15 10:10:35.440086 env[1327]: time="2025-05-15T10:10:35.440055294Z" level=info msg="CreateContainer within sandbox \"4e3ea2e273f71cf8123de2b7282deb194847ddf4efc346fb7120d55a7fe3db8d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"d1a7794f1f669b22f1235b500f00c49be6d3ee84242398d8ed2dcdb1589e294f\"" May 15 10:10:35.440580 env[1327]: time="2025-05-15T10:10:35.440554242Z" level=info msg="StartContainer for \"d1a7794f1f669b22f1235b500f00c49be6d3ee84242398d8ed2dcdb1589e294f\"" May 15 10:10:35.442562 env[1327]: time="2025-05-15T10:10:35.442529637Z" level=info msg="CreateContainer within sandbox \"ecd9304dc26d2ab17afc3a098d31c6b34d2b04dbcca979f43514ff3426495122\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5378ea0d7a271d79aa43a3e62ef203f4f4fc80b926ea38ffac454e7923e31909\"" May 15 10:10:35.442904 env[1327]: time="2025-05-15T10:10:35.442881892Z" level=info msg="StartContainer for \"5378ea0d7a271d79aa43a3e62ef203f4f4fc80b926ea38ffac454e7923e31909\"" May 15 10:10:35.531087 env[1327]: time="2025-05-15T10:10:35.527495156Z" level=info msg="StartContainer for \"5378ea0d7a271d79aa43a3e62ef203f4f4fc80b926ea38ffac454e7923e31909\" returns successfully" May 15 10:10:35.531087 env[1327]: time="2025-05-15T10:10:35.527495316Z" level=info msg="StartContainer for \"3aa3264261db5a5d08fdb810b86ede517d7fd0b44dd22dc8d3f7f0c7213d9930\" returns successfully" May 15 10:10:35.570792 env[1327]: time="2025-05-15T10:10:35.570714259Z" level=info msg="StartContainer for \"d1a7794f1f669b22f1235b500f00c49be6d3ee84242398d8ed2dcdb1589e294f\" returns successfully" May 15 10:10:35.846374 kubelet[1871]: I0515 10:10:35.846270 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:10:36.378979 kubelet[1871]: E0515 10:10:36.378946 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:36.381082 kubelet[1871]: E0515 10:10:36.381061 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:36.382664 kubelet[1871]: E0515 10:10:36.382640 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:36.878202 kubelet[1871]: E0515 10:10:36.878097 1871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 10:10:37.037984 kubelet[1871]: I0515 10:10:37.037951 1871 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:10:37.334633 kubelet[1871]: I0515 10:10:37.334534 1871 apiserver.go:52] "Watching apiserver" May 15 10:10:37.339928 kubelet[1871]: I0515 10:10:37.339902 1871 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:10:37.388521 kubelet[1871]: E0515 10:10:37.388485 1871 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 10:10:37.388974 kubelet[1871]: E0515 10:10:37.388949 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:38.389962 kubelet[1871]: E0515 10:10:38.389930 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:38.948626 systemd[1]: Reloading. May 15 10:10:39.001330 /usr/lib/systemd/system-generators/torcx-generator[2168]: time="2025-05-15T10:10:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:10:39.001358 /usr/lib/systemd/system-generators/torcx-generator[2168]: time="2025-05-15T10:10:39Z" level=info msg="torcx already run" May 15 10:10:39.068410 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:10:39.068433 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:10:39.088389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:10:39.165102 kubelet[1871]: I0515 10:10:39.165037 1871 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:10:39.165344 systemd[1]: Stopping kubelet.service... May 15 10:10:39.184555 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:10:39.184864 systemd[1]: Stopped kubelet.service. May 15 10:10:39.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:39.185614 kernel: kauditd_printk_skb: 47 callbacks suppressed May 15 10:10:39.185665 kernel: audit: type=1131 audit(1747303839.183:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:39.186612 systemd[1]: Starting kubelet.service... May 15 10:10:39.271409 systemd[1]: Started kubelet.service. May 15 10:10:39.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:39.278250 kernel: audit: type=1130 audit(1747303839.270:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:10:39.310314 kubelet[2223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:10:39.310314 kubelet[2223]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 10:10:39.310314 kubelet[2223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 10:10:39.310663 kubelet[2223]: I0515 10:10:39.310381 2223 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:10:39.314883 kubelet[2223]: I0515 10:10:39.314859 2223 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 10:10:39.314978 kubelet[2223]: I0515 10:10:39.314968 2223 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:10:39.315308 kubelet[2223]: I0515 10:10:39.315291 2223 server.go:927] "Client rotation is on, will bootstrap in background" May 15 10:10:39.316652 kubelet[2223]: I0515 10:10:39.316628 2223 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 10:10:39.317888 kubelet[2223]: I0515 10:10:39.317856 2223 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:10:39.323038 kubelet[2223]: I0515 10:10:39.323013 2223 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 10:10:39.323581 kubelet[2223]: I0515 10:10:39.323552 2223 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:10:39.324141 kubelet[2223]: I0515 10:10:39.323969 2223 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 10:10:39.324329 kubelet[2223]: I0515 10:10:39.324310 2223 topology_manager.go:138] "Creating topology manager with none policy" May 15 10:10:39.324398 kubelet[2223]: I0515 10:10:39.324381 2223 container_manager_linux.go:301] "Creating device plugin manager" May 15 10:10:39.324487 kubelet[2223]: I0515 10:10:39.324477 2223 state_mem.go:36] "Initialized new in-memory state store" May 15 10:10:39.324649 kubelet[2223]: I0515 10:10:39.324637 2223 kubelet.go:400] "Attempting to sync node with API server" May 15 10:10:39.324734 kubelet[2223]: I0515 10:10:39.324723 2223 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" May 15 10:10:39.324813 kubelet[2223]: I0515 10:10:39.324795 2223 kubelet.go:312] "Adding apiserver pod source" May 15 10:10:39.324874 kubelet[2223]: I0515 10:10:39.324865 2223 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:10:39.325379 kubelet[2223]: I0515 10:10:39.325359 2223 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:10:39.325511 kubelet[2223]: I0515 10:10:39.325496 2223 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:10:39.325859 kubelet[2223]: I0515 10:10:39.325843 2223 server.go:1264] "Started kubelet" May 15 10:10:39.325929 kubelet[2223]: I0515 10:10:39.325908 2223 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:10:39.327158 kubelet[2223]: I0515 10:10:39.327129 2223 server.go:455] "Adding debug handlers to kubelet server" May 15 10:10:39.331977 kubelet[2223]: I0515 10:10:39.331888 2223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:10:39.332265 kubelet[2223]: I0515 10:10:39.332246 2223 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:10:39.337009 kubelet[2223]: I0515 10:10:39.335653 2223 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 15 10:10:39.337009 kubelet[2223]: I0515 10:10:39.335701 2223 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 15 10:10:39.337009 kubelet[2223]: I0515 10:10:39.335724 2223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:10:39.334000 audit[2223]: AVC avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:39.334000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:39.340975 kernel: audit: type=1400 audit(1747303839.334:224): avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:39.341516 kernel: audit: type=1401 audit(1747303839.334:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:39.341560 kernel: audit: type=1300 audit(1747303839.334:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e027b0 a1=4000523998 a2=4000e02780 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:39.334000 audit[2223]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e027b0 a1=4000523998 a2=4000e02780 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:39.334000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:39.348349 kernel: audit: type=1327 audit(1747303839.334:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:39.334000 audit[2223]: AVC avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:39.351590 kernel: audit: type=1400 audit(1747303839.334:225): avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:39.351661 kernel: audit: type=1401 audit(1747303839.334:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:39.334000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:39.334000 audit[2223]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000995a80 a1=40005239b0 a2=4000e02840 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:39.358082 kubelet[2223]: E0515 10:10:39.357853 2223 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:10:39.358160 kernel: audit: type=1300 audit(1747303839.334:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000995a80 a1=40005239b0 a2=4000e02840 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:39.358195 kernel: audit: type=1327 audit(1747303839.334:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:39.334000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:39.358429 kubelet[2223]: I0515 10:10:39.358395 2223 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 10:10:39.358512 kubelet[2223]: I0515 10:10:39.358496 2223 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:10:39.358649 kubelet[2223]: I0515 10:10:39.358631 2223 reconciler.go:26] "Reconciler: start to sync state" May 15 10:10:39.365261 kubelet[2223]: I0515 10:10:39.365240 2223 factory.go:221] Registration of the containerd container factory successfully May 15 10:10:39.365261 kubelet[2223]: I0515 10:10:39.365258 2223 factory.go:221] Registration of the systemd container factory successfully May 15 10:10:39.365372 kubelet[2223]: I0515 10:10:39.365324 2223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:10:39.370109 kubelet[2223]: I0515 10:10:39.369165 2223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:10:39.370109 kubelet[2223]: I0515 10:10:39.369966 2223 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 10:10:39.370109 kubelet[2223]: I0515 10:10:39.370016 2223 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 10:10:39.370109 kubelet[2223]: I0515 10:10:39.370044 2223 kubelet.go:2337] "Starting kubelet main sync loop" May 15 10:10:39.370109 kubelet[2223]: E0515 10:10:39.370099 2223 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:10:39.407077 kubelet[2223]: I0515 10:10:39.407031 2223 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 10:10:39.407209 kubelet[2223]: I0515 10:10:39.407195 2223 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 10:10:39.407353 kubelet[2223]: I0515 10:10:39.407341 2223 state_mem.go:36] "Initialized new in-memory state store" May 15 10:10:39.407557 kubelet[2223]: I0515 10:10:39.407540 2223 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 10:10:39.407636 kubelet[2223]: I0515 10:10:39.407613 2223 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 10:10:39.407697 kubelet[2223]: I0515 10:10:39.407687 2223 policy_none.go:49] "None policy: Start" May 15 10:10:39.408316 kubelet[2223]: I0515 10:10:39.408289 2223 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 10:10:39.408498 kubelet[2223]: I0515 10:10:39.408486 2223 state_mem.go:35] "Initializing new in-memory state store" May 15 10:10:39.408745 kubelet[2223]: I0515 10:10:39.408727 2223 state_mem.go:75] "Updated machine memory state" May 15 10:10:39.409968 kubelet[2223]: I0515 10:10:39.409947 2223 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:10:39.408000 audit[2223]: AVC avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:10:39.408000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 10:10:39.408000 audit[2223]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bc3c80 a1=400092b200 a2=4000bc3c50 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:39.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 10:10:39.410346 kubelet[2223]: I0515 10:10:39.410329 2223 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 15 10:10:39.410557 kubelet[2223]: I0515 10:10:39.410520 2223 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:10:39.411528 kubelet[2223]: I0515 10:10:39.411453 2223 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:10:39.470610 kubelet[2223]: I0515 10:10:39.470514 2223 topology_manager.go:215] "Topology Admit Handler" podUID="89d15ac7b96673a0872356cf715a3cbb" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:10:39.470898 kubelet[2223]: I0515 10:10:39.470878 2223 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:10:39.471083 kubelet[2223]: I0515 10:10:39.471046 2223 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:10:39.476852 kubelet[2223]: E0515 10:10:39.476822 2223 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:10:39.516888 kubelet[2223]: I0515 10:10:39.516869 2223 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:10:39.522957 kubelet[2223]: I0515 10:10:39.522875 2223 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 10:10:39.523109 kubelet[2223]: I0515 10:10:39.523097 2223 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:10:39.560910 kubelet[2223]: I0515 10:10:39.560884 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:39.561042 kubelet[2223]: I0515 10:10:39.561027 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:10:39.561116 kubelet[2223]: I0515 10:10:39.561101 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89d15ac7b96673a0872356cf715a3cbb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89d15ac7b96673a0872356cf715a3cbb\") " pod="kube-system/kube-apiserver-localhost" May 15 10:10:39.561179 kubelet[2223]: I0515 10:10:39.561168 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:39.561268 kubelet[2223]: I0515 10:10:39.561254 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:39.561349 kubelet[2223]: I0515 10:10:39.561335 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89d15ac7b96673a0872356cf715a3cbb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89d15ac7b96673a0872356cf715a3cbb\") " pod="kube-system/kube-apiserver-localhost" May 15 10:10:39.561425 kubelet[2223]: I0515 10:10:39.561412 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89d15ac7b96673a0872356cf715a3cbb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89d15ac7b96673a0872356cf715a3cbb\") " pod="kube-system/kube-apiserver-localhost" May 15 10:10:39.561503 kubelet[2223]: I0515 10:10:39.561490 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:39.561576 kubelet[2223]: I0515 10:10:39.561564 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:10:39.776988 kubelet[2223]: E0515 10:10:39.776895 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:39.777163 kubelet[2223]: E0515 10:10:39.777048 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:39.777304 kubelet[2223]: E0515 10:10:39.777277 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:40.325912 kubelet[2223]: I0515 10:10:40.325866 2223 apiserver.go:52] "Watching apiserver" May 15 10:10:40.359233 kubelet[2223]: I0515 10:10:40.359182 2223 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:10:40.384095 kubelet[2223]: E0515 10:10:40.384058 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:40.384684 kubelet[2223]: E0515 10:10:40.384664 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:40.385553 kubelet[2223]: E0515 10:10:40.384873 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:40.405970 kubelet[2223]: I0515 10:10:40.405915 
2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.4058837730000002 podStartE2EDuration="2.405883773s" podCreationTimestamp="2025-05-15 10:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:10:40.405857816 +0000 UTC m=+1.131346278" watchObservedRunningTime="2025-05-15 10:10:40.405883773 +0000 UTC m=+1.131372195" May 15 10:10:40.413654 kubelet[2223]: I0515 10:10:40.413591 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.413575185 podStartE2EDuration="1.413575185s" podCreationTimestamp="2025-05-15 10:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:10:40.413054294 +0000 UTC m=+1.138542716" watchObservedRunningTime="2025-05-15 10:10:40.413575185 +0000 UTC m=+1.139063607" May 15 10:10:40.419729 kubelet[2223]: I0515 10:10:40.419665 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.419648893 podStartE2EDuration="1.419648893s" podCreationTimestamp="2025-05-15 10:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:10:40.419582622 +0000 UTC m=+1.145071044" watchObservedRunningTime="2025-05-15 10:10:40.419648893 +0000 UTC m=+1.145137275" May 15 10:10:41.386304 kubelet[2223]: E0515 10:10:41.386269 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:44.123733 sudo[1489]: pam_unix(sudo:session): session closed for user root May 15 10:10:44.122000 audit[1489]: USER_END pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:44.122000 audit[1489]: CRED_DISP pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 10:10:44.125006 sshd[1483]: pam_unix(sshd:session): session closed for user core May 15 10:10:44.124000 audit[1483]: USER_END pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:44.124000 audit[1483]: CRED_DISP pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:10:44.127628 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:33590.service: Deactivated successfully. May 15 10:10:44.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.43:22-10.0.0.1:33590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:10:44.128609 systemd-logind[1310]: Session 7 logged out. Waiting for processes to exit. May 15 10:10:44.128661 systemd[1]: session-7.scope: Deactivated successfully. May 15 10:10:44.129601 systemd-logind[1310]: Removed session 7. May 15 10:10:44.179277 kubelet[2223]: E0515 10:10:44.179242 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:47.757533 kubelet[2223]: E0515 10:10:47.757502 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:48.310172 kubelet[2223]: E0515 10:10:48.308907 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:48.395325 kubelet[2223]: E0515 10:10:48.395288 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:48.396612 kubelet[2223]: E0515 10:10:48.396586 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:49.397166 kubelet[2223]: E0515 10:10:49.397134 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:52.897849 kubelet[2223]: I0515 10:10:52.897797 2223 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 10:10:52.898660 env[1327]: time="2025-05-15T10:10:52.898614758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
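The audit PROCTITLE fields in the kubelet records above (and in the netfilter records that follow) are hex-encoded command lines with NUL bytes separating the arguments. A minimal sketch, using a hypothetical helper named decode_proctitle and the kubelet record's value copied verbatim (the kernel truncates the field, so the last flag ends at "--confi"), of turning one back into readable text:

#!/usr/bin/env python3
# Sketch only: decode an audit PROCTITLE hex blob into a readable command line.
def decode_proctitle(hex_blob: str) -> str:
    # The field is hex; NUL bytes separate argv entries.
    raw = bytes.fromhex(hex_blob)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # Value taken from the kubelet audit record above (truncated by the kernel).
    sample = ("2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B7562"
              "65636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261"
              "702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F657463"
              "2F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669")
    # Prints: /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
    #         --kubeconfig=/etc/kubernetes/kubelet.conf --confi
    print(decode_proctitle(sample))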
May 15 10:10:52.899046 kubelet[2223]: I0515 10:10:52.899029 2223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 10:10:53.398761 kubelet[2223]: I0515 10:10:53.398720 2223 topology_manager.go:215] "Topology Admit Handler" podUID="635413c6-4225-4a9e-9d4f-981c35372721" podNamespace="kube-system" podName="kube-proxy-ggxlf" May 15 10:10:53.460246 kubelet[2223]: I0515 10:10:53.460198 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/635413c6-4225-4a9e-9d4f-981c35372721-lib-modules\") pod \"kube-proxy-ggxlf\" (UID: \"635413c6-4225-4a9e-9d4f-981c35372721\") " pod="kube-system/kube-proxy-ggxlf" May 15 10:10:53.460246 kubelet[2223]: I0515 10:10:53.460252 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/635413c6-4225-4a9e-9d4f-981c35372721-kube-proxy\") pod \"kube-proxy-ggxlf\" (UID: \"635413c6-4225-4a9e-9d4f-981c35372721\") " pod="kube-system/kube-proxy-ggxlf" May 15 10:10:53.460417 kubelet[2223]: I0515 10:10:53.460272 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/635413c6-4225-4a9e-9d4f-981c35372721-xtables-lock\") pod \"kube-proxy-ggxlf\" (UID: \"635413c6-4225-4a9e-9d4f-981c35372721\") " pod="kube-system/kube-proxy-ggxlf" May 15 10:10:53.460417 kubelet[2223]: I0515 10:10:53.460288 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xpz\" (UniqueName: \"kubernetes.io/projected/635413c6-4225-4a9e-9d4f-981c35372721-kube-api-access-f6xpz\") pod \"kube-proxy-ggxlf\" (UID: \"635413c6-4225-4a9e-9d4f-981c35372721\") " pod="kube-system/kube-proxy-ggxlf" May 15 10:10:53.702209 kubelet[2223]: E0515 10:10:53.702105 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:53.702929 env[1327]: time="2025-05-15T10:10:53.702893237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggxlf,Uid:635413c6-4225-4a9e-9d4f-981c35372721,Namespace:kube-system,Attempt:0,}" May 15 10:10:53.717370 env[1327]: time="2025-05-15T10:10:53.717299845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:10:53.717482 env[1327]: time="2025-05-15T10:10:53.717378041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:10:53.717482 env[1327]: time="2025-05-15T10:10:53.717404639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:10:53.717622 env[1327]: time="2025-05-15T10:10:53.717587549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fea950b300fa0900cf38ddfeff2a747874f02c36a7ca01cecf8f4857c6b8f9a pid=2318 runtime=io.containerd.runc.v2 May 15 10:10:53.771433 env[1327]: time="2025-05-15T10:10:53.771383562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggxlf,Uid:635413c6-4225-4a9e-9d4f-981c35372721,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fea950b300fa0900cf38ddfeff2a747874f02c36a7ca01cecf8f4857c6b8f9a\"" May 15 10:10:53.772686 kubelet[2223]: E0515 10:10:53.772278 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:53.774712 env[1327]: time="2025-05-15T10:10:53.774680731Z" level=info msg="CreateContainer within sandbox \"1fea950b300fa0900cf38ddfeff2a747874f02c36a7ca01cecf8f4857c6b8f9a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:10:53.791647 env[1327]: time="2025-05-15T10:10:53.791585075Z" level=info msg="CreateContainer within sandbox \"1fea950b300fa0900cf38ddfeff2a747874f02c36a7ca01cecf8f4857c6b8f9a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a6b9d99278b0fc09ef812df3bd7e1dad015f7e161342c33cec196e6ab9d4122\"" May 15 10:10:53.793068 env[1327]: time="2025-05-15T10:10:53.793035471Z" level=info msg="StartContainer for \"8a6b9d99278b0fc09ef812df3bd7e1dad015f7e161342c33cec196e6ab9d4122\"" May 15 10:10:53.861237 kubelet[2223]: I0515 10:10:53.854777 2223 topology_manager.go:215] "Topology Admit Handler" podUID="04942144-03bf-4e33-a0d7-15aee682929d" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-zpzjf" May 15 10:10:53.861395 env[1327]: time="2025-05-15T10:10:53.858914947Z" level=info msg="StartContainer for \"8a6b9d99278b0fc09ef812df3bd7e1dad015f7e161342c33cec196e6ab9d4122\" returns successfully" May 15 10:10:53.962000 audit[2410]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:53.963682 kubelet[2223]: I0515 10:10:53.963651 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04942144-03bf-4e33-a0d7-15aee682929d-var-lib-calico\") pod \"tigera-operator-797db67f8-zpzjf\" (UID: \"04942144-03bf-4e33-a0d7-15aee682929d\") " pod="tigera-operator/tigera-operator-797db67f8-zpzjf" May 15 10:10:53.963944 kubelet[2223]: I0515 10:10:53.963693 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-889r9\" (UniqueName: \"kubernetes.io/projected/04942144-03bf-4e33-a0d7-15aee682929d-kube-api-access-889r9\") pod \"tigera-operator-797db67f8-zpzjf\" (UID: \"04942144-03bf-4e33-a0d7-15aee682929d\") " pod="tigera-operator/tigera-operator-797db67f8-zpzjf" May 15 10:10:53.963972 kernel: kauditd_printk_skb: 9 callbacks suppressed May 15 10:10:53.964003 kernel: audit: type=1325 audit(1747303853.962:232): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:53.963000 audit[2411]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 
10:10:53.968056 kernel: audit: type=1325 audit(1747303853.963:233): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:53.963000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7f690c0 a2=0 a3=1 items=0 ppid=2368 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.971721 kernel: audit: type=1300 audit(1747303853.963:233): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7f690c0 a2=0 a3=1 items=0 ppid=2368 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 10:10:53.973652 kernel: audit: type=1327 audit(1747303853.963:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 10:10:53.973706 kernel: audit: type=1325 audit(1747303853.966:234): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:53.966000 audit[2412]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:53.966000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5d25e70 a2=0 a3=1 items=0 ppid=2368 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.979355 kernel: audit: type=1300 audit(1747303853.966:234): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5d25e70 a2=0 a3=1 items=0 ppid=2368 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 10:10:53.981170 kernel: audit: type=1327 audit(1747303853.966:234): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 10:10:53.981199 kernel: audit: type=1325 audit(1747303853.967:235): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:53.967000 audit[2413]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:53.967000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7f70340 a2=0 a3=1 items=0 ppid=2368 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.986574 kernel: audit: type=1300 audit(1747303853.967:235): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7f70340 a2=0 a3=1 items=0 
ppid=2368 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 10:10:53.988421 kernel: audit: type=1327 audit(1747303853.967:235): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 10:10:53.962000 audit[2410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd25d1d90 a2=0 a3=1 items=0 ppid=2368 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 10:10:53.969000 audit[2414]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:53.969000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd30b960 a2=0 a3=1 items=0 ppid=2368 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.969000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 10:10:53.970000 audit[2415]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:53.970000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd5062380 a2=0 a3=1 items=0 ppid=2368 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:53.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 10:10:54.064000 audit[2416]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.064000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd326af00 a2=0 a3=1 items=0 ppid=2368 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 15 10:10:54.068000 audit[2418]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.068000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcd736360 a2=0 a3=1 items=0 ppid=2368 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 15 10:10:54.073000 audit[2422]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.073000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffed3b4d30 a2=0 a3=1 items=0 ppid=2368 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.073000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 15 10:10:54.075000 audit[2423]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.075000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcf306b0 a2=0 a3=1 items=0 ppid=2368 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 15 10:10:54.077000 audit[2425]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.077000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc1ebd910 a2=0 a3=1 items=0 ppid=2368 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 15 10:10:54.078000 audit[2426]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.078000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc79c2c60 a2=0 a3=1 items=0 ppid=2368 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 15 10:10:54.080000 audit[2428]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.080000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 
a1=ffffe065af10 a2=0 a3=1 items=0 ppid=2368 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 15 10:10:54.083000 audit[2431]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.083000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd5d96400 a2=0 a3=1 items=0 ppid=2368 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.083000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 15 10:10:54.084000 audit[2432]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.084000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee29a5d0 a2=0 a3=1 items=0 ppid=2368 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 15 10:10:54.086000 audit[2434]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.086000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffd02a2b0 a2=0 a3=1 items=0 ppid=2368 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 15 10:10:54.087000 audit[2435]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.087000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5a92460 a2=0 a3=1 items=0 ppid=2368 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 15 10:10:54.089000 audit[2437]: NETFILTER_CFG table=filter:55 family=2 
entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.089000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd03328c0 a2=0 a3=1 items=0 ppid=2368 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 10:10:54.092000 audit[2440]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.092000 audit[2440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd656b0e0 a2=0 a3=1 items=0 ppid=2368 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 10:10:54.095000 audit[2443]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.095000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffc718bc0 a2=0 a3=1 items=0 ppid=2368 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 15 10:10:54.096000 audit[2444]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.096000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5cb8df0 a2=0 a3=1 items=0 ppid=2368 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 15 10:10:54.098000 audit[2446]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.098000 audit[2446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe9ad6610 a2=0 a3=1 items=0 ppid=2368 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.098000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 10:10:54.101000 audit[2449]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.101000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdda87cd0 a2=0 a3=1 items=0 ppid=2368 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 10:10:54.102000 audit[2450]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.102000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6adbe50 a2=0 a3=1 items=0 ppid=2368 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 15 10:10:54.104000 audit[2452]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 10:10:54.104000 audit[2452]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffef4bd30 a2=0 a3=1 items=0 ppid=2368 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 15 10:10:54.122000 audit[2458]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:10:54.122000 audit[2458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff13ad4c0 a2=0 a3=1 items=0 ppid=2368 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:10:54.132000 audit[2458]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:10:54.132000 audit[2458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff13ad4c0 a2=0 a3=1 items=0 ppid=2368 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:10:54.134000 audit[2463]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.134000 audit[2463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff6f588f0 a2=0 a3=1 items=0 ppid=2368 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 15 10:10:54.136000 audit[2465]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.136000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd07d53d0 a2=0 a3=1 items=0 ppid=2368 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 15 10:10:54.139000 audit[2468]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.139000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff2fc92a0 a2=0 a3=1 items=0 ppid=2368 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 15 10:10:54.140000 audit[2469]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.140000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9c44b90 a2=0 a3=1 items=0 ppid=2368 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 15 10:10:54.142000 audit[2471]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.142000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd94f9f60 a2=0 a3=1 
items=0 ppid=2368 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.142000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 15 10:10:54.143000 audit[2472]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.143000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde1675e0 a2=0 a3=1 items=0 ppid=2368 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 15 10:10:54.146000 audit[2474]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.146000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffde378770 a2=0 a3=1 items=0 ppid=2368 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 15 10:10:54.149000 audit[2477]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.149000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd95dbd80 a2=0 a3=1 items=0 ppid=2368 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 15 10:10:54.150000 audit[2478]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.150000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1739800 a2=0 a3=1 items=0 ppid=2368 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 15 10:10:54.152000 audit[2480]: NETFILTER_CFG table=filter:74 
family=10 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.152000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9ae1c70 a2=0 a3=1 items=0 ppid=2368 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 15 10:10:54.153000 audit[2481]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.153000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8954650 a2=0 a3=1 items=0 ppid=2368 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 15 10:10:54.155000 audit[2483]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.155000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe02c37a0 a2=0 a3=1 items=0 ppid=2368 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 10:10:54.158477 env[1327]: time="2025-05-15T10:10:54.158430818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zpzjf,Uid:04942144-03bf-4e33-a0d7-15aee682929d,Namespace:tigera-operator,Attempt:0,}" May 15 10:10:54.159000 audit[2486]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.159000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2a6af10 a2=0 a3=1 items=0 ppid=2368 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 15 10:10:54.163000 audit[2489]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.163000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 
a1=ffffcd63aa10 a2=0 a3=1 items=0 ppid=2368 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 15 10:10:54.164000 audit[2490]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.164000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc711ede0 a2=0 a3=1 items=0 ppid=2368 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 15 10:10:54.166000 audit[2492]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.166000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffff68ab00 a2=0 a3=1 items=0 ppid=2368 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 10:10:54.170000 audit[2502]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.170000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe7e50250 a2=0 a3=1 items=0 ppid=2368 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 10:10:54.172547 env[1327]: time="2025-05-15T10:10:54.172478857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:10:54.172620 env[1327]: time="2025-05-15T10:10:54.172561693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:10:54.172620 env[1327]: time="2025-05-15T10:10:54.172595331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:10:54.172865 env[1327]: time="2025-05-15T10:10:54.172836318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/501f25506b9ab8bd1f3587c68ff2ff52bf5bb5f84388b936c8602f678230e58a pid=2503 runtime=io.containerd.runc.v2 May 15 10:10:54.172000 audit[2510]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.172000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff332e6e0 a2=0 a3=1 items=0 ppid=2368 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.172000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 15 10:10:54.174000 audit[2516]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.174000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe55cce50 a2=0 a3=1 items=0 ppid=2368 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 15 10:10:54.175000 audit[2517]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.175000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6afe040 a2=0 a3=1 items=0 ppid=2368 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 15 10:10:54.178000 audit[2524]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 10:10:54.178000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc32ac340 a2=0 a3=1 items=0 ppid=2368 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 10:10:54.187733 kubelet[2223]: E0515 10:10:54.187476 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:54.188000 audit[2532]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" May 15 10:10:54.188000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcd977950 a2=0 a3=1 items=0 ppid=2368 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 10:10:54.197000 audit[2541]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 15 10:10:54.197000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=fffffda06fd0 a2=0 a3=1 items=0 ppid=2368 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.197000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:10:54.198000 audit[2541]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 15 10:10:54.198000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffffda06fd0 a2=0 a3=1 items=0 ppid=2368 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:10:54.198000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:10:54.221266 env[1327]: time="2025-05-15T10:10:54.218676996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zpzjf,Uid:04942144-03bf-4e33-a0d7-15aee682929d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"501f25506b9ab8bd1f3587c68ff2ff52bf5bb5f84388b936c8602f678230e58a\"" May 15 10:10:54.222500 env[1327]: time="2025-05-15T10:10:54.222419673Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 10:10:54.406235 kubelet[2223]: E0515 10:10:54.406197 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:10:54.415952 kubelet[2223]: I0515 10:10:54.415162 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggxlf" podStartSLOduration=1.415145639 podStartE2EDuration="1.415145639s" podCreationTimestamp="2025-05-15 10:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:10:54.415072603 +0000 UTC m=+15.140560985" watchObservedRunningTime="2025-05-15 10:10:54.415145639 +0000 UTC m=+15.140634061" May 15 10:10:54.977123 update_engine[1312]: I0515 10:10:54.977077 1312 update_attempter.cc:509] Updating boot flags... May 15 10:10:55.446192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464194029.mount: Deactivated successfully. 
May 15 10:10:56.215333 env[1327]: time="2025-05-15T10:10:56.215284242Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:56.217035 env[1327]: time="2025-05-15T10:10:56.216995761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:56.218478 env[1327]: time="2025-05-15T10:10:56.218448332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:56.219985 env[1327]: time="2025-05-15T10:10:56.219955460Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:10:56.220857 env[1327]: time="2025-05-15T10:10:56.220822979Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 10:10:56.223846 env[1327]: time="2025-05-15T10:10:56.223813996Z" level=info msg="CreateContainer within sandbox \"501f25506b9ab8bd1f3587c68ff2ff52bf5bb5f84388b936c8602f678230e58a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 10:10:56.234619 env[1327]: time="2025-05-15T10:10:56.234569764Z" level=info msg="CreateContainer within sandbox \"501f25506b9ab8bd1f3587c68ff2ff52bf5bb5f84388b936c8602f678230e58a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"33eca5f3f6f86af3ca411301f0e669315a8cf1c4e9e54e5b6b2132db1e2756a0\"" May 15 10:10:56.235231 env[1327]: time="2025-05-15T10:10:56.235164976Z" level=info msg="StartContainer for \"33eca5f3f6f86af3ca411301f0e669315a8cf1c4e9e54e5b6b2132db1e2756a0\"" May 15 10:10:56.300948 env[1327]: time="2025-05-15T10:10:56.300907288Z" level=info msg="StartContainer for \"33eca5f3f6f86af3ca411301f0e669315a8cf1c4e9e54e5b6b2132db1e2756a0\" returns successfully" May 15 10:11:00.239370 kernel: kauditd_printk_skb: 143 callbacks suppressed May 15 10:11:00.241247 kernel: audit: type=1325 audit(1747303860.231:283): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.241283 kernel: audit: type=1300 audit(1747303860.231:283): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe350d980 a2=0 a3=1 items=0 ppid=2368 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.231000 audit[2602]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.231000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe350d980 a2=0 a3=1 items=0 ppid=2368 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.231000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.243611 kernel: audit: type=1327 audit(1747303860.231:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.241000 audit[2602]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.245986 kernel: audit: type=1325 audit(1747303860.241:284): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.241000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe350d980 a2=0 a3=1 items=0 ppid=2368 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.252918 kernel: audit: type=1300 audit(1747303860.241:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe350d980 a2=0 a3=1 items=0 ppid=2368 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.252989 kernel: audit: type=1327 audit(1747303860.241:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.270000 audit[2605]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.277835 kernel: audit: type=1325 audit(1747303860.270:285): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.277906 kernel: audit: type=1300 audit(1747303860.270:285): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffec85b9a0 a2=0 a3=1 items=0 ppid=2368 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.270000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffec85b9a0 a2=0 a3=1 items=0 ppid=2368 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.279980 kernel: audit: type=1327 audit(1747303860.270:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.279000 audit[2605]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:00.279000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffec85b9a0 a2=0 a3=1 items=0 ppid=2368 pid=2605 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:00.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:00.283256 kernel: audit: type=1325 audit(1747303860.279:286): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:01.616000 audit[2607]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:01.616000 audit[2607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffc3bdf900 a2=0 a3=1 items=0 ppid=2368 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:01.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:01.624000 audit[2607]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:01.624000 audit[2607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3bdf900 a2=0 a3=1 items=0 ppid=2368 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:01.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:01.643035 kubelet[2223]: I0515 10:11:01.642839 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-zpzjf" podStartSLOduration=6.642223096 podStartE2EDuration="8.642822969s" podCreationTimestamp="2025-05-15 10:10:53 +0000 UTC" firstStartedPulling="2025-05-15 10:10:54.221360211 +0000 UTC m=+14.946848593" lastFinishedPulling="2025-05-15 10:10:56.221960084 +0000 UTC m=+16.947448466" observedRunningTime="2025-05-15 10:10:56.421090329 +0000 UTC m=+17.146578751" watchObservedRunningTime="2025-05-15 10:11:01.642822969 +0000 UTC m=+22.368311591" May 15 10:11:01.644455 kubelet[2223]: I0515 10:11:01.644424 2223 topology_manager.go:215] "Topology Admit Handler" podUID="cd0c77db-32ac-4191-9fe2-a6fe590ff62c" podNamespace="calico-system" podName="calico-typha-5b48c46546-wjhg7" May 15 10:11:01.723431 kubelet[2223]: I0515 10:11:01.723389 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd0c77db-32ac-4191-9fe2-a6fe590ff62c-tigera-ca-bundle\") pod \"calico-typha-5b48c46546-wjhg7\" (UID: \"cd0c77db-32ac-4191-9fe2-a6fe590ff62c\") " pod="calico-system/calico-typha-5b48c46546-wjhg7" May 15 10:11:01.723652 kubelet[2223]: I0515 10:11:01.723631 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cd0c77db-32ac-4191-9fe2-a6fe590ff62c-typha-certs\") pod \"calico-typha-5b48c46546-wjhg7\" (UID: \"cd0c77db-32ac-4191-9fe2-a6fe590ff62c\") " pod="calico-system/calico-typha-5b48c46546-wjhg7" 
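The kubelet reconciler_common.go records above and below enumerate the volumes being attached for the calico-typha pod (tigera-ca-bundle, typha-certs, the projected kube-api-access token), and the calico-node entries that follow do the same. A throwaway parsing sketch, assuming the journal is available as plain text lines like these; the regex and helper are tuned to this output only and are not any kubelet API:

    import re

    # The inner quotes in these records are backslash-escaped because kubelet
    # prints them inside a quoted message string, hence the \\" in the pattern.
    VOL = re.compile(r'started for volume \\"([^\\"]+)\\".*? pod="([^"]+)"')

    def attached_volumes(lines):
        for line in lines:
            m = VOL.search(line)
            if m:
                yield m.group(2), m.group(1)   # (pod, volume name)

    # Fed the records above, this yields pairs such as
    # ("calico-system/calico-typha-5b48c46546-wjhg7", "typha-certs").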
May 15 10:11:01.723733 kubelet[2223]: I0515 10:11:01.723719 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzgd\" (UniqueName: \"kubernetes.io/projected/cd0c77db-32ac-4191-9fe2-a6fe590ff62c-kube-api-access-8nzgd\") pod \"calico-typha-5b48c46546-wjhg7\" (UID: \"cd0c77db-32ac-4191-9fe2-a6fe590ff62c\") " pod="calico-system/calico-typha-5b48c46546-wjhg7" May 15 10:11:01.837235 kubelet[2223]: I0515 10:11:01.836857 2223 topology_manager.go:215] "Topology Admit Handler" podUID="24191cb1-4e3a-4105-bd4e-f6f4e788f61d" podNamespace="calico-system" podName="calico-node-xnkk4" May 15 10:11:01.925270 kubelet[2223]: I0515 10:11:01.925112 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-cni-bin-dir\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925270 kubelet[2223]: I0515 10:11:01.925162 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-var-run-calico\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925270 kubelet[2223]: I0515 10:11:01.925181 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-cni-net-dir\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925270 kubelet[2223]: I0515 10:11:01.925196 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-flexvol-driver-host\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925270 kubelet[2223]: I0515 10:11:01.925238 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-policysync\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925520 kubelet[2223]: I0515 10:11:01.925256 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-var-lib-calico\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925520 kubelet[2223]: I0515 10:11:01.925273 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-xtables-lock\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925520 kubelet[2223]: I0515 10:11:01.925300 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-node-certs\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925520 kubelet[2223]: I0515 10:11:01.925318 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-tigera-ca-bundle\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925520 kubelet[2223]: I0515 10:11:01.925337 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-lib-modules\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925632 kubelet[2223]: I0515 10:11:01.925384 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-cni-log-dir\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.925632 kubelet[2223]: I0515 10:11:01.925421 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2g4t\" (UniqueName: \"kubernetes.io/projected/24191cb1-4e3a-4105-bd4e-f6f4e788f61d-kube-api-access-p2g4t\") pod \"calico-node-xnkk4\" (UID: \"24191cb1-4e3a-4105-bd4e-f6f4e788f61d\") " pod="calico-system/calico-node-xnkk4" May 15 10:11:01.951410 kubelet[2223]: E0515 10:11:01.951379 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:01.951944 env[1327]: time="2025-05-15T10:11:01.951888799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b48c46546-wjhg7,Uid:cd0c77db-32ac-4191-9fe2-a6fe590ff62c,Namespace:calico-system,Attempt:0,}" May 15 10:11:01.980914 env[1327]: time="2025-05-15T10:11:01.980849641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:01.980914 env[1327]: time="2025-05-15T10:11:01.980893079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:01.980914 env[1327]: time="2025-05-15T10:11:01.980903519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:01.981191 env[1327]: time="2025-05-15T10:11:01.981158310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72948f456a20ef4c3a5c4f8b46a83bc201446819905db85ccdecb661c49190fb pid=2619 runtime=io.containerd.runc.v2 May 15 10:11:02.030805 kubelet[2223]: I0515 10:11:02.029828 2223 topology_manager.go:215] "Topology Admit Handler" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" podNamespace="calico-system" podName="csi-node-driver-lr4rp" May 15 10:11:02.030805 kubelet[2223]: E0515 10:11:02.030135 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:02.031739 kubelet[2223]: E0515 10:11:02.031713 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.031739 kubelet[2223]: W0515 10:11:02.031733 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.031921 kubelet[2223]: E0515 10:11:02.031759 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.032517 kubelet[2223]: E0515 10:11:02.032493 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.032615 kubelet[2223]: W0515 10:11:02.032528 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.032615 kubelet[2223]: E0515 10:11:02.032552 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.036324 kubelet[2223]: E0515 10:11:02.036250 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.036324 kubelet[2223]: W0515 10:11:02.036276 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.036451 kubelet[2223]: E0515 10:11:02.036342 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.036561 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.038294 kubelet[2223]: W0515 10:11:02.036573 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.036616 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.036767 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.038294 kubelet[2223]: W0515 10:11:02.036776 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.036814 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.036974 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.038294 kubelet[2223]: W0515 10:11:02.036982 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.037018 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.038294 kubelet[2223]: E0515 10:11:02.037154 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.038529 kubelet[2223]: W0515 10:11:02.037160 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.038529 kubelet[2223]: E0515 10:11:02.037170 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.038529 kubelet[2223]: E0515 10:11:02.037320 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.038529 kubelet[2223]: W0515 10:11:02.037327 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.038529 kubelet[2223]: E0515 10:11:02.037337 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.038529 kubelet[2223]: E0515 10:11:02.037505 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.038529 kubelet[2223]: W0515 10:11:02.037513 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.038529 kubelet[2223]: E0515 10:11:02.037523 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.039038 kubelet[2223]: E0515 10:11:02.038932 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.039038 kubelet[2223]: W0515 10:11:02.038949 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.039038 kubelet[2223]: E0515 10:11:02.038964 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.039230 kubelet[2223]: E0515 10:11:02.039148 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.039230 kubelet[2223]: W0515 10:11:02.039157 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.039412 kubelet[2223]: E0515 10:11:02.039265 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.039412 kubelet[2223]: E0515 10:11:02.039378 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.039412 kubelet[2223]: W0515 10:11:02.039387 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.039412 kubelet[2223]: E0515 10:11:02.039398 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.039728 kubelet[2223]: E0515 10:11:02.039625 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.039728 kubelet[2223]: W0515 10:11:02.039637 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.039728 kubelet[2223]: E0515 10:11:02.039646 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.050886 kubelet[2223]: E0515 10:11:02.050846 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.050886 kubelet[2223]: W0515 10:11:02.050867 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.050886 kubelet[2223]: E0515 10:11:02.050884 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.066761 env[1327]: time="2025-05-15T10:11:02.066709303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b48c46546-wjhg7,Uid:cd0c77db-32ac-4191-9fe2-a6fe590ff62c,Namespace:calico-system,Attempt:0,} returns sandbox id \"72948f456a20ef4c3a5c4f8b46a83bc201446819905db85ccdecb661c49190fb\"" May 15 10:11:02.067468 kubelet[2223]: E0515 10:11:02.067441 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:02.070087 env[1327]: time="2025-05-15T10:11:02.069108425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 10:11:02.119090 kubelet[2223]: E0515 10:11:02.119039 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.119090 kubelet[2223]: W0515 10:11:02.119061 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.119090 kubelet[2223]: E0515 10:11:02.119079 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.121420 kubelet[2223]: E0515 10:11:02.121388 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.121420 kubelet[2223]: W0515 10:11:02.121408 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.121420 kubelet[2223]: E0515 10:11:02.121423 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.121614 kubelet[2223]: E0515 10:11:02.121588 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.121614 kubelet[2223]: W0515 10:11:02.121600 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.121614 kubelet[2223]: E0515 10:11:02.121608 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.123333 kubelet[2223]: E0515 10:11:02.123309 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.123333 kubelet[2223]: W0515 10:11:02.123328 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.123440 kubelet[2223]: E0515 10:11:02.123340 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.123521 kubelet[2223]: E0515 10:11:02.123505 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.123521 kubelet[2223]: W0515 10:11:02.123512 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.123521 kubelet[2223]: E0515 10:11:02.123520 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.123685 kubelet[2223]: E0515 10:11:02.123670 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.123685 kubelet[2223]: W0515 10:11:02.123682 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.123755 kubelet[2223]: E0515 10:11:02.123692 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.123835 kubelet[2223]: E0515 10:11:02.123822 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.123835 kubelet[2223]: W0515 10:11:02.123832 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.123882 kubelet[2223]: E0515 10:11:02.123839 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.123980 kubelet[2223]: E0515 10:11:02.123970 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.123980 kubelet[2223]: W0515 10:11:02.123980 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124033 kubelet[2223]: E0515 10:11:02.123987 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.124131 kubelet[2223]: E0515 10:11:02.124120 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.124131 kubelet[2223]: W0515 10:11:02.124130 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124235 kubelet[2223]: E0515 10:11:02.124138 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.124284 kubelet[2223]: E0515 10:11:02.124273 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.124284 kubelet[2223]: W0515 10:11:02.124283 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124346 kubelet[2223]: E0515 10:11:02.124290 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.124422 kubelet[2223]: E0515 10:11:02.124412 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.124422 kubelet[2223]: W0515 10:11:02.124420 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124470 kubelet[2223]: E0515 10:11:02.124427 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.124554 kubelet[2223]: E0515 10:11:02.124545 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.124583 kubelet[2223]: W0515 10:11:02.124554 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124583 kubelet[2223]: E0515 10:11:02.124560 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.124709 kubelet[2223]: E0515 10:11:02.124696 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.124709 kubelet[2223]: W0515 10:11:02.124706 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124770 kubelet[2223]: E0515 10:11:02.124712 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.124886 kubelet[2223]: E0515 10:11:02.124871 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.124886 kubelet[2223]: W0515 10:11:02.124884 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.124959 kubelet[2223]: E0515 10:11:02.124894 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.125045 kubelet[2223]: E0515 10:11:02.125032 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.125045 kubelet[2223]: W0515 10:11:02.125044 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.125097 kubelet[2223]: E0515 10:11:02.125051 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.125198 kubelet[2223]: E0515 10:11:02.125183 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.125198 kubelet[2223]: W0515 10:11:02.125192 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.125284 kubelet[2223]: E0515 10:11:02.125200 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.125392 kubelet[2223]: E0515 10:11:02.125352 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.125392 kubelet[2223]: W0515 10:11:02.125363 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.125392 kubelet[2223]: E0515 10:11:02.125370 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.131886 kubelet[2223]: E0515 10:11:02.131841 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.131886 kubelet[2223]: W0515 10:11:02.131864 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.131886 kubelet[2223]: E0515 10:11:02.131877 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.132156 kubelet[2223]: E0515 10:11:02.132136 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.132156 kubelet[2223]: W0515 10:11:02.132149 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.132272 kubelet[2223]: E0515 10:11:02.132159 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.132364 kubelet[2223]: E0515 10:11:02.132340 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.132364 kubelet[2223]: W0515 10:11:02.132351 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.132364 kubelet[2223]: E0515 10:11:02.132362 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.132615 kubelet[2223]: E0515 10:11:02.132591 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.132615 kubelet[2223]: W0515 10:11:02.132604 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.132615 kubelet[2223]: E0515 10:11:02.132613 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.132704 kubelet[2223]: I0515 10:11:02.132641 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f0cb8081-235c-41eb-97c5-f1fef3d019bf-varrun\") pod \"csi-node-driver-lr4rp\" (UID: \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\") " pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:02.132805 kubelet[2223]: E0515 10:11:02.132786 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.132805 kubelet[2223]: W0515 10:11:02.132798 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.132854 kubelet[2223]: E0515 10:11:02.132811 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.132854 kubelet[2223]: I0515 10:11:02.132826 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f0cb8081-235c-41eb-97c5-f1fef3d019bf-socket-dir\") pod \"csi-node-driver-lr4rp\" (UID: \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\") " pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:02.133019 kubelet[2223]: E0515 10:11:02.133003 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.133019 kubelet[2223]: W0515 10:11:02.133016 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.133208 kubelet[2223]: E0515 10:11:02.133029 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.133208 kubelet[2223]: I0515 10:11:02.133043 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0cb8081-235c-41eb-97c5-f1fef3d019bf-kubelet-dir\") pod \"csi-node-driver-lr4rp\" (UID: \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\") " pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:02.134061 kubelet[2223]: E0515 10:11:02.133961 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.134061 kubelet[2223]: W0515 10:11:02.133983 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.134061 kubelet[2223]: E0515 10:11:02.134000 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.134061 kubelet[2223]: I0515 10:11:02.134021 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f0cb8081-235c-41eb-97c5-f1fef3d019bf-registration-dir\") pod \"csi-node-driver-lr4rp\" (UID: \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\") " pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:02.135548 kubelet[2223]: E0515 10:11:02.135438 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.135548 kubelet[2223]: W0515 10:11:02.135453 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.135548 kubelet[2223]: E0515 10:11:02.135471 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.135843 kubelet[2223]: E0515 10:11:02.135728 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.135843 kubelet[2223]: W0515 10:11:02.135741 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.136062 kubelet[2223]: E0515 10:11:02.135969 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.136062 kubelet[2223]: W0515 10:11:02.135981 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.136293 kubelet[2223]: E0515 10:11:02.136184 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.136293 kubelet[2223]: W0515 10:11:02.136196 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.136503 kubelet[2223]: E0515 10:11:02.136412 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.136503 kubelet[2223]: W0515 10:11:02.136423 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.136733 kubelet[2223]: E0515 10:11:02.136616 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.136733 kubelet[2223]: W0515 10:11:02.136629 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.136733 kubelet[2223]: E0515 10:11:02.136640 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.136733 kubelet[2223]: E0515 10:11:02.136651 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.136997 kubelet[2223]: E0515 10:11:02.136884 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.136997 kubelet[2223]: W0515 10:11:02.136896 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.136997 kubelet[2223]: E0515 10:11:02.136905 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.136997 kubelet[2223]: E0515 10:11:02.136917 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.137275 kubelet[2223]: E0515 10:11:02.137157 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.137275 kubelet[2223]: W0515 10:11:02.137170 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.137275 kubelet[2223]: E0515 10:11:02.137179 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.137275 kubelet[2223]: E0515 10:11:02.137190 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.137600 kubelet[2223]: E0515 10:11:02.137443 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.137600 kubelet[2223]: W0515 10:11:02.137455 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.137600 kubelet[2223]: E0515 10:11:02.137465 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.137600 kubelet[2223]: E0515 10:11:02.137477 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.137600 kubelet[2223]: I0515 10:11:02.137494 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klk95\" (UniqueName: \"kubernetes.io/projected/f0cb8081-235c-41eb-97c5-f1fef3d019bf-kube-api-access-klk95\") pod \"csi-node-driver-lr4rp\" (UID: \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\") " pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:02.137878 kubelet[2223]: E0515 10:11:02.137785 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.137878 kubelet[2223]: W0515 10:11:02.137798 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.137878 kubelet[2223]: E0515 10:11:02.137808 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.138065 kubelet[2223]: E0515 10:11:02.138024 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.138065 kubelet[2223]: W0515 10:11:02.138035 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.138065 kubelet[2223]: E0515 10:11:02.138044 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.139908 kubelet[2223]: E0515 10:11:02.139886 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:02.140418 env[1327]: time="2025-05-15T10:11:02.140367923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xnkk4,Uid:24191cb1-4e3a-4105-bd4e-f6f4e788f61d,Namespace:calico-system,Attempt:0,}" May 15 10:11:02.155531 env[1327]: time="2025-05-15T10:11:02.153995683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:02.155531 env[1327]: time="2025-05-15T10:11:02.154048521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:02.155531 env[1327]: time="2025-05-15T10:11:02.154059401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:02.155531 env[1327]: time="2025-05-15T10:11:02.154209196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72 pid=2722 runtime=io.containerd.runc.v2 May 15 10:11:02.203761 env[1327]: time="2025-05-15T10:11:02.203652559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xnkk4,Uid:24191cb1-4e3a-4105-bd4e-f6f4e788f61d,Namespace:calico-system,Attempt:0,} returns sandbox id \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\"" May 15 10:11:02.206382 kubelet[2223]: E0515 10:11:02.206242 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:02.238741 kubelet[2223]: E0515 10:11:02.238705 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.238741 kubelet[2223]: W0515 10:11:02.238727 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.238741 kubelet[2223]: E0515 10:11:02.238746 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.238986 kubelet[2223]: E0515 10:11:02.238966 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.238986 kubelet[2223]: W0515 10:11:02.238979 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.239063 kubelet[2223]: E0515 10:11:02.238998 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.239197 kubelet[2223]: E0515 10:11:02.239175 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.239197 kubelet[2223]: W0515 10:11:02.239187 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.239197 kubelet[2223]: E0515 10:11:02.239196 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.239384 kubelet[2223]: E0515 10:11:02.239374 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.239384 kubelet[2223]: W0515 10:11:02.239385 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.239460 kubelet[2223]: E0515 10:11:02.239393 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.239603 kubelet[2223]: E0515 10:11:02.239593 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.239603 kubelet[2223]: W0515 10:11:02.239604 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.239671 kubelet[2223]: E0515 10:11:02.239620 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.239810 kubelet[2223]: E0515 10:11:02.239801 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.239810 kubelet[2223]: W0515 10:11:02.239810 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.239881 kubelet[2223]: E0515 10:11:02.239823 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.239977 kubelet[2223]: E0515 10:11:02.239968 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.239977 kubelet[2223]: W0515 10:11:02.239978 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.240035 kubelet[2223]: E0515 10:11:02.239985 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.240126 kubelet[2223]: E0515 10:11:02.240112 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.240170 kubelet[2223]: W0515 10:11:02.240127 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.240288 kubelet[2223]: E0515 10:11:02.240230 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.240288 kubelet[2223]: E0515 10:11:02.240274 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.240288 kubelet[2223]: W0515 10:11:02.240281 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.240431 kubelet[2223]: E0515 10:11:02.240405 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.240431 kubelet[2223]: E0515 10:11:02.240422 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.240506 kubelet[2223]: W0515 10:11:02.240433 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.240506 kubelet[2223]: E0515 10:11:02.240447 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.240598 kubelet[2223]: E0515 10:11:02.240587 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.240598 kubelet[2223]: W0515 10:11:02.240598 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.240720 kubelet[2223]: E0515 10:11:02.240696 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.240720 kubelet[2223]: E0515 10:11:02.240715 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.240798 kubelet[2223]: W0515 10:11:02.240722 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.240866 kubelet[2223]: E0515 10:11:02.240836 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.240866 kubelet[2223]: E0515 10:11:02.240864 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.240950 kubelet[2223]: W0515 10:11:02.240872 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.241018 kubelet[2223]: E0515 10:11:02.240992 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.241064 kubelet[2223]: E0515 10:11:02.241041 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.241064 kubelet[2223]: W0515 10:11:02.241050 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.241064 kubelet[2223]: E0515 10:11:02.241062 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.241240 kubelet[2223]: E0515 10:11:02.241229 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.241240 kubelet[2223]: W0515 10:11:02.241239 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.241314 kubelet[2223]: E0515 10:11:02.241251 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.241493 kubelet[2223]: E0515 10:11:02.241394 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.241493 kubelet[2223]: W0515 10:11:02.241404 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.241493 kubelet[2223]: E0515 10:11:02.241411 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.241786 kubelet[2223]: E0515 10:11:02.241647 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.241786 kubelet[2223]: W0515 10:11:02.241662 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.241786 kubelet[2223]: E0515 10:11:02.241678 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.242095 kubelet[2223]: E0515 10:11:02.241974 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.242095 kubelet[2223]: W0515 10:11:02.241988 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.242095 kubelet[2223]: E0515 10:11:02.241999 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.242393 kubelet[2223]: E0515 10:11:02.242282 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.242393 kubelet[2223]: W0515 10:11:02.242296 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.242393 kubelet[2223]: E0515 10:11:02.242310 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.242644 kubelet[2223]: E0515 10:11:02.242553 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.242644 kubelet[2223]: W0515 10:11:02.242565 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.242644 kubelet[2223]: E0515 10:11:02.242609 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.242899 kubelet[2223]: E0515 10:11:02.242793 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.242899 kubelet[2223]: W0515 10:11:02.242805 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.242899 kubelet[2223]: E0515 10:11:02.242864 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.243160 kubelet[2223]: E0515 10:11:02.243048 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.243160 kubelet[2223]: W0515 10:11:02.243060 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.243160 kubelet[2223]: E0515 10:11:02.243121 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.243698 kubelet[2223]: E0515 10:11:02.243322 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.243698 kubelet[2223]: W0515 10:11:02.243381 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.243698 kubelet[2223]: E0515 10:11:02.243401 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.244101 kubelet[2223]: E0515 10:11:02.243862 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.244101 kubelet[2223]: W0515 10:11:02.243874 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.244101 kubelet[2223]: E0515 10:11:02.243884 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.244353 kubelet[2223]: E0515 10:11:02.244306 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.244353 kubelet[2223]: W0515 10:11:02.244318 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.244353 kubelet[2223]: E0515 10:11:02.244328 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:02.255422 kubelet[2223]: E0515 10:11:02.255395 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:02.255422 kubelet[2223]: W0515 10:11:02.255416 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:02.255574 kubelet[2223]: E0515 10:11:02.255433 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:02.636000 audit[2783]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:02.636000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffe3a80a00 a2=0 a3=1 items=0 ppid=2368 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:02.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:02.641000 audit[2783]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:02.641000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe3a80a00 a2=0 a3=1 items=0 ppid=2368 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:02.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:02.835646 systemd[1]: run-containerd-runc-k8s.io-72948f456a20ef4c3a5c4f8b46a83bc201446819905db85ccdecb661c49190fb-runc.0Azzf6.mount: Deactivated successfully. May 15 10:11:03.371850 kubelet[2223]: E0515 10:11:03.371797 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:03.774598 env[1327]: time="2025-05-15T10:11:03.774546569Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:03.775786 env[1327]: time="2025-05-15T10:11:03.775757332Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:03.777679 env[1327]: time="2025-05-15T10:11:03.777652914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:03.778938 env[1327]: time="2025-05-15T10:11:03.778912716Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:03.779415 env[1327]: time="2025-05-15T10:11:03.779390062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 10:11:03.783623 env[1327]: time="2025-05-15T10:11:03.783592695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 10:11:03.814142 env[1327]: time="2025-05-15T10:11:03.814060172Z" level=info msg="CreateContainer within sandbox 
\"72948f456a20ef4c3a5c4f8b46a83bc201446819905db85ccdecb661c49190fb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 10:11:03.846406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215562250.mount: Deactivated successfully. May 15 10:11:03.849771 env[1327]: time="2025-05-15T10:11:03.849701292Z" level=info msg="CreateContainer within sandbox \"72948f456a20ef4c3a5c4f8b46a83bc201446819905db85ccdecb661c49190fb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bbfbbd7ff3b44f5774420111915c5e644a828180921a30ca9ad35887be47b61c\"" May 15 10:11:03.851917 env[1327]: time="2025-05-15T10:11:03.851801949Z" level=info msg="StartContainer for \"bbfbbd7ff3b44f5774420111915c5e644a828180921a30ca9ad35887be47b61c\"" May 15 10:11:03.925468 env[1327]: time="2025-05-15T10:11:03.925208285Z" level=info msg="StartContainer for \"bbfbbd7ff3b44f5774420111915c5e644a828180921a30ca9ad35887be47b61c\" returns successfully" May 15 10:11:04.432936 kubelet[2223]: E0515 10:11:04.432885 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:04.449155 kubelet[2223]: E0515 10:11:04.449120 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.449155 kubelet[2223]: W0515 10:11:04.449141 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.449155 kubelet[2223]: E0515 10:11:04.449157 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.449353 kubelet[2223]: E0515 10:11:04.449330 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.449353 kubelet[2223]: W0515 10:11:04.449338 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.449353 kubelet[2223]: E0515 10:11:04.449346 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.449516 kubelet[2223]: E0515 10:11:04.449492 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.449516 kubelet[2223]: W0515 10:11:04.449502 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.449516 kubelet[2223]: E0515 10:11:04.449510 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.449678 kubelet[2223]: E0515 10:11:04.449658 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.449678 kubelet[2223]: W0515 10:11:04.449669 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.449678 kubelet[2223]: E0515 10:11:04.449677 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.449845 kubelet[2223]: E0515 10:11:04.449825 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.449845 kubelet[2223]: W0515 10:11:04.449836 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.449845 kubelet[2223]: E0515 10:11:04.449843 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.449997 kubelet[2223]: E0515 10:11:04.449979 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.449997 kubelet[2223]: W0515 10:11:04.449992 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450049 kubelet[2223]: E0515 10:11:04.449999 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.450126 kubelet[2223]: E0515 10:11:04.450117 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450152 kubelet[2223]: W0515 10:11:04.450126 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450152 kubelet[2223]: E0515 10:11:04.450133 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.450273 kubelet[2223]: E0515 10:11:04.450264 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450308 kubelet[2223]: W0515 10:11:04.450273 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450308 kubelet[2223]: E0515 10:11:04.450280 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.450413 kubelet[2223]: E0515 10:11:04.450404 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450440 kubelet[2223]: W0515 10:11:04.450413 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450440 kubelet[2223]: E0515 10:11:04.450420 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.450540 kubelet[2223]: E0515 10:11:04.450532 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450576 kubelet[2223]: W0515 10:11:04.450540 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450576 kubelet[2223]: E0515 10:11:04.450548 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.450667 kubelet[2223]: E0515 10:11:04.450657 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450667 kubelet[2223]: W0515 10:11:04.450665 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450723 kubelet[2223]: E0515 10:11:04.450672 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.450807 kubelet[2223]: E0515 10:11:04.450797 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450807 kubelet[2223]: W0515 10:11:04.450807 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450868 kubelet[2223]: E0515 10:11:04.450814 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.450954 kubelet[2223]: E0515 10:11:04.450945 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.450983 kubelet[2223]: W0515 10:11:04.450954 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.450983 kubelet[2223]: E0515 10:11:04.450962 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.451142 kubelet[2223]: E0515 10:11:04.451131 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.451142 kubelet[2223]: W0515 10:11:04.451142 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.451198 kubelet[2223]: E0515 10:11:04.451150 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.451344 kubelet[2223]: E0515 10:11:04.451296 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.451344 kubelet[2223]: W0515 10:11:04.451334 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.451409 kubelet[2223]: E0515 10:11:04.451346 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.460423 kubelet[2223]: E0515 10:11:04.460384 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.460423 kubelet[2223]: W0515 10:11:04.460404 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.460423 kubelet[2223]: E0515 10:11:04.460420 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.460690 kubelet[2223]: E0515 10:11:04.460661 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.460690 kubelet[2223]: W0515 10:11:04.460673 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.460690 kubelet[2223]: E0515 10:11:04.460687 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.460896 kubelet[2223]: E0515 10:11:04.460862 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.460896 kubelet[2223]: W0515 10:11:04.460883 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.460958 kubelet[2223]: E0515 10:11:04.460901 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.461135 kubelet[2223]: E0515 10:11:04.461117 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.461135 kubelet[2223]: W0515 10:11:04.461129 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.461185 kubelet[2223]: E0515 10:11:04.461139 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.461289 kubelet[2223]: E0515 10:11:04.461279 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.461325 kubelet[2223]: W0515 10:11:04.461289 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.461325 kubelet[2223]: E0515 10:11:04.461300 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.461467 kubelet[2223]: E0515 10:11:04.461450 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.461467 kubelet[2223]: W0515 10:11:04.461462 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.461515 kubelet[2223]: E0515 10:11:04.461475 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.461843 kubelet[2223]: E0515 10:11:04.461830 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.461843 kubelet[2223]: W0515 10:11:04.461842 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.462014 kubelet[2223]: E0515 10:11:04.461900 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.462058 kubelet[2223]: E0515 10:11:04.462027 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.462058 kubelet[2223]: W0515 10:11:04.462035 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.462105 kubelet[2223]: E0515 10:11:04.462056 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.462188 kubelet[2223]: E0515 10:11:04.462172 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.462188 kubelet[2223]: W0515 10:11:04.462186 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.462262 kubelet[2223]: E0515 10:11:04.462207 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.462480 kubelet[2223]: E0515 10:11:04.462464 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.462511 kubelet[2223]: W0515 10:11:04.462479 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.462511 kubelet[2223]: E0515 10:11:04.462496 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.462673 kubelet[2223]: E0515 10:11:04.462663 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.462701 kubelet[2223]: W0515 10:11:04.462673 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.462701 kubelet[2223]: E0515 10:11:04.462685 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.462820 kubelet[2223]: E0515 10:11:04.462811 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.462845 kubelet[2223]: W0515 10:11:04.462820 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.462845 kubelet[2223]: E0515 10:11:04.462835 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.463026 kubelet[2223]: E0515 10:11:04.463012 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.463026 kubelet[2223]: W0515 10:11:04.463024 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.463092 kubelet[2223]: E0515 10:11:04.463045 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.463271 kubelet[2223]: E0515 10:11:04.463256 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.463303 kubelet[2223]: W0515 10:11:04.463271 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.463303 kubelet[2223]: E0515 10:11:04.463280 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.463485 kubelet[2223]: E0515 10:11:04.463473 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.463511 kubelet[2223]: W0515 10:11:04.463484 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.463511 kubelet[2223]: E0515 10:11:04.463497 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.463688 kubelet[2223]: E0515 10:11:04.463670 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.463688 kubelet[2223]: W0515 10:11:04.463683 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.463753 kubelet[2223]: E0515 10:11:04.463700 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.463992 kubelet[2223]: E0515 10:11:04.463978 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.464023 kubelet[2223]: W0515 10:11:04.464001 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.464023 kubelet[2223]: E0515 10:11:04.464014 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 10:11:04.464166 kubelet[2223]: E0515 10:11:04.464156 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 10:11:04.464190 kubelet[2223]: W0515 10:11:04.464168 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 10:11:04.464190 kubelet[2223]: E0515 10:11:04.464176 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 10:11:04.836955 env[1327]: time="2025-05-15T10:11:04.836897774Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:04.838242 env[1327]: time="2025-05-15T10:11:04.838183178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:04.841492 env[1327]: time="2025-05-15T10:11:04.841449405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:04.846243 env[1327]: time="2025-05-15T10:11:04.846200430Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:04.846706 env[1327]: time="2025-05-15T10:11:04.846665497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 10:11:04.851060 env[1327]: time="2025-05-15T10:11:04.851007694Z" level=info msg="CreateContainer within sandbox \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 10:11:04.868275 env[1327]: time="2025-05-15T10:11:04.868210325Z" level=info msg="CreateContainer within sandbox \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b8ca5e40ebca8626718eda89a12e17e168600029af53b5c9600ad401150b44f1\"" May 15 10:11:04.868699 env[1327]: time="2025-05-15T10:11:04.868670912Z" level=info msg="StartContainer for \"b8ca5e40ebca8626718eda89a12e17e168600029af53b5c9600ad401150b44f1\"" May 15 10:11:04.935434 env[1327]: time="2025-05-15T10:11:04.935386658Z" level=info msg="StartContainer for \"b8ca5e40ebca8626718eda89a12e17e168600029af53b5c9600ad401150b44f1\" returns successfully" May 15 10:11:04.969007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ca5e40ebca8626718eda89a12e17e168600029af53b5c9600ad401150b44f1-rootfs.mount: Deactivated successfully. 
May 15 10:11:05.009016 env[1327]: time="2025-05-15T10:11:05.008965902Z" level=info msg="shim disconnected" id=b8ca5e40ebca8626718eda89a12e17e168600029af53b5c9600ad401150b44f1 May 15 10:11:05.009016 env[1327]: time="2025-05-15T10:11:05.009015141Z" level=warning msg="cleaning up after shim disconnected" id=b8ca5e40ebca8626718eda89a12e17e168600029af53b5c9600ad401150b44f1 namespace=k8s.io May 15 10:11:05.009202 env[1327]: time="2025-05-15T10:11:05.009025780Z" level=info msg="cleaning up dead shim" May 15 10:11:05.015763 env[1327]: time="2025-05-15T10:11:05.015720962Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:11:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2905 runtime=io.containerd.runc.v2\n" May 15 10:11:05.370498 kubelet[2223]: E0515 10:11:05.370442 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:05.435792 kubelet[2223]: E0515 10:11:05.435763 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:05.436141 kubelet[2223]: I0515 10:11:05.436115 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 10:11:05.436709 kubelet[2223]: E0515 10:11:05.436677 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:05.436991 env[1327]: time="2025-05-15T10:11:05.436961669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 10:11:05.454256 kubelet[2223]: I0515 10:11:05.454009 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b48c46546-wjhg7" podStartSLOduration=2.739493707 podStartE2EDuration="4.453994295s" podCreationTimestamp="2025-05-15 10:11:01 +0000 UTC" firstStartedPulling="2025-05-15 10:11:02.068812155 +0000 UTC m=+22.794300537" lastFinishedPulling="2025-05-15 10:11:03.783312703 +0000 UTC m=+24.508801125" observedRunningTime="2025-05-15 10:11:04.44682929 +0000 UTC m=+25.172317712" watchObservedRunningTime="2025-05-15 10:11:05.453994295 +0000 UTC m=+26.179482717" May 15 10:11:06.778309 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:58314.service. May 15 10:11:06.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.43:22-10.0.0.1:58314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:06.780094 kernel: kauditd_printk_skb: 14 callbacks suppressed May 15 10:11:06.780173 kernel: audit: type=1130 audit(1747303866.777:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.43:22-10.0.0.1:58314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:11:06.824000 audit[2926]: USER_ACCT pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.826588 sshd[2926]: Accepted publickey for core from 10.0.0.1 port 58314 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:06.827475 sshd[2926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:06.825000 audit[2926]: CRED_ACQ pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.836784 kernel: audit: type=1101 audit(1747303866.824:292): pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.836945 kernel: audit: type=1103 audit(1747303866.825:293): pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.839222 kernel: audit: type=1006 audit(1747303866.825:294): pid=2926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 15 10:11:06.825000 audit[2926]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc1b84450 a2=3 a3=1 items=0 ppid=1 pid=2926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:06.844398 kernel: audit: type=1300 audit(1747303866.825:294): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc1b84450 a2=3 a3=1 items=0 ppid=1 pid=2926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:06.825000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:06.845543 kernel: audit: type=1327 audit(1747303866.825:294): proctitle=737368643A20636F7265205B707269765D May 15 10:11:06.847117 systemd-logind[1310]: New session 8 of user core. May 15 10:11:06.847619 systemd[1]: Started session-8.scope. 
May 15 10:11:06.852000 audit[2926]: USER_START pid=2926 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.859238 kernel: audit: type=1105 audit(1747303866.852:295): pid=2926 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.858000 audit[2929]: CRED_ACQ pid=2929 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:06.864270 kernel: audit: type=1103 audit(1747303866.858:296): pid=2929 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:07.013110 sshd[2926]: pam_unix(sshd:session): session closed for user core May 15 10:11:07.012000 audit[2926]: USER_END pid=2926 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:07.019095 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:58314.service: Deactivated successfully. May 15 10:11:07.023550 kernel: audit: type=1106 audit(1747303867.012:297): pid=2926 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:07.023628 kernel: audit: type=1104 audit(1747303867.012:298): pid=2926 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:07.012000 audit[2926]: CRED_DISP pid=2926 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:07.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.43:22-10.0.0.1:58314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:07.019921 systemd[1]: session-8.scope: Deactivated successfully. May 15 10:11:07.023294 systemd-logind[1310]: Session 8 logged out. Waiting for processes to exit. May 15 10:11:07.025971 systemd-logind[1310]: Removed session 8. 
May 15 10:11:07.370459 kubelet[2223]: E0515 10:11:07.370412 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:08.930670 env[1327]: time="2025-05-15T10:11:08.930623669Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:08.932297 env[1327]: time="2025-05-15T10:11:08.932269273Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:08.934273 env[1327]: time="2025-05-15T10:11:08.934234830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:08.935684 env[1327]: time="2025-05-15T10:11:08.935659558Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:08.936141 env[1327]: time="2025-05-15T10:11:08.936110068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 10:11:08.938474 env[1327]: time="2025-05-15T10:11:08.938435857Z" level=info msg="CreateContainer within sandbox \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 10:11:08.956466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645980463.mount: Deactivated successfully. 
May 15 10:11:08.963835 env[1327]: time="2025-05-15T10:11:08.963775462Z" level=info msg="CreateContainer within sandbox \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"482e8b64e7cb61917967ed7c936395e1862baa6752ef56c48ec48e3d80fb4d82\"" May 15 10:11:08.964575 env[1327]: time="2025-05-15T10:11:08.964535885Z" level=info msg="StartContainer for \"482e8b64e7cb61917967ed7c936395e1862baa6752ef56c48ec48e3d80fb4d82\"" May 15 10:11:09.043935 env[1327]: time="2025-05-15T10:11:09.043888003Z" level=info msg="StartContainer for \"482e8b64e7cb61917967ed7c936395e1862baa6752ef56c48ec48e3d80fb4d82\" returns successfully" May 15 10:11:09.370627 kubelet[2223]: E0515 10:11:09.370563 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:09.447990 kubelet[2223]: E0515 10:11:09.447956 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:09.659597 env[1327]: time="2025-05-15T10:11:09.659482864Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:11:09.666440 kubelet[2223]: I0515 10:11:09.665620 2223 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 10:11:09.681491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-482e8b64e7cb61917967ed7c936395e1862baa6752ef56c48ec48e3d80fb4d82-rootfs.mount: Deactivated successfully. 
May 15 10:11:09.683921 env[1327]: time="2025-05-15T10:11:09.683880682Z" level=info msg="shim disconnected" id=482e8b64e7cb61917967ed7c936395e1862baa6752ef56c48ec48e3d80fb4d82 May 15 10:11:09.684106 env[1327]: time="2025-05-15T10:11:09.684074598Z" level=warning msg="cleaning up after shim disconnected" id=482e8b64e7cb61917967ed7c936395e1862baa6752ef56c48ec48e3d80fb4d82 namespace=k8s.io May 15 10:11:09.684187 env[1327]: time="2025-05-15T10:11:09.684162637Z" level=info msg="cleaning up dead shim" May 15 10:11:09.690042 kubelet[2223]: I0515 10:11:09.689997 2223 topology_manager.go:215] "Topology Admit Handler" podUID="ec6491f9-2d72-4fff-91b5-379e16328d47" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bdvkn" May 15 10:11:09.693598 kubelet[2223]: I0515 10:11:09.693564 2223 topology_manager.go:215] "Topology Admit Handler" podUID="12e389b9-6e26-4c9e-8f17-589ec81bbd99" podNamespace="calico-system" podName="calico-kube-controllers-6777c65db9-lhgd2" May 15 10:11:09.698095 kubelet[2223]: I0515 10:11:09.698063 2223 topology_manager.go:215] "Topology Admit Handler" podUID="a56d6f69-05c6-49eb-910c-8dc8aa5ddf37" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ddwbx" May 15 10:11:09.698225 kubelet[2223]: I0515 10:11:09.698198 2223 topology_manager.go:215] "Topology Admit Handler" podUID="304e1844-0899-4d12-8f60-1c590160ff7b" podNamespace="calico-apiserver" podName="calico-apiserver-6bff68f469-qmms6" May 15 10:11:09.698516 kubelet[2223]: I0515 10:11:09.698489 2223 topology_manager.go:215] "Topology Admit Handler" podUID="5adeff55-662e-40ab-bc79-10150d8d28e3" podNamespace="calico-apiserver" podName="calico-apiserver-6bff68f469-flc89" May 15 10:11:09.706537 env[1327]: time="2025-05-15T10:11:09.706480858Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:11:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2991 runtime=io.containerd.runc.v2\ntime=\"2025-05-15T10:11:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 15 10:11:09.809821 kubelet[2223]: I0515 10:11:09.809775 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12e389b9-6e26-4c9e-8f17-589ec81bbd99-tigera-ca-bundle\") pod \"calico-kube-controllers-6777c65db9-lhgd2\" (UID: \"12e389b9-6e26-4c9e-8f17-589ec81bbd99\") " pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" May 15 10:11:09.809821 kubelet[2223]: I0515 10:11:09.809821 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z855\" (UniqueName: \"kubernetes.io/projected/304e1844-0899-4d12-8f60-1c590160ff7b-kube-api-access-9z855\") pod \"calico-apiserver-6bff68f469-qmms6\" (UID: \"304e1844-0899-4d12-8f60-1c590160ff7b\") " pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" May 15 10:11:09.810005 kubelet[2223]: I0515 10:11:09.809859 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec6491f9-2d72-4fff-91b5-379e16328d47-config-volume\") pod \"coredns-7db6d8ff4d-bdvkn\" (UID: \"ec6491f9-2d72-4fff-91b5-379e16328d47\") " pod="kube-system/coredns-7db6d8ff4d-bdvkn" May 15 10:11:09.810005 kubelet[2223]: I0515 10:11:09.809879 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzsl5\" (UniqueName: 
\"kubernetes.io/projected/a56d6f69-05c6-49eb-910c-8dc8aa5ddf37-kube-api-access-fzsl5\") pod \"coredns-7db6d8ff4d-ddwbx\" (UID: \"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37\") " pod="kube-system/coredns-7db6d8ff4d-ddwbx" May 15 10:11:09.810005 kubelet[2223]: I0515 10:11:09.809899 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmf9\" (UniqueName: \"kubernetes.io/projected/ec6491f9-2d72-4fff-91b5-379e16328d47-kube-api-access-8wmf9\") pod \"coredns-7db6d8ff4d-bdvkn\" (UID: \"ec6491f9-2d72-4fff-91b5-379e16328d47\") " pod="kube-system/coredns-7db6d8ff4d-bdvkn" May 15 10:11:09.810005 kubelet[2223]: I0515 10:11:09.809921 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a56d6f69-05c6-49eb-910c-8dc8aa5ddf37-config-volume\") pod \"coredns-7db6d8ff4d-ddwbx\" (UID: \"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37\") " pod="kube-system/coredns-7db6d8ff4d-ddwbx" May 15 10:11:09.810005 kubelet[2223]: I0515 10:11:09.809958 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/304e1844-0899-4d12-8f60-1c590160ff7b-calico-apiserver-certs\") pod \"calico-apiserver-6bff68f469-qmms6\" (UID: \"304e1844-0899-4d12-8f60-1c590160ff7b\") " pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" May 15 10:11:09.810159 kubelet[2223]: I0515 10:11:09.809993 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5adeff55-662e-40ab-bc79-10150d8d28e3-calico-apiserver-certs\") pod \"calico-apiserver-6bff68f469-flc89\" (UID: \"5adeff55-662e-40ab-bc79-10150d8d28e3\") " pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" May 15 10:11:09.810159 kubelet[2223]: I0515 10:11:09.810010 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldf9\" (UniqueName: \"kubernetes.io/projected/5adeff55-662e-40ab-bc79-10150d8d28e3-kube-api-access-2ldf9\") pod \"calico-apiserver-6bff68f469-flc89\" (UID: \"5adeff55-662e-40ab-bc79-10150d8d28e3\") " pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" May 15 10:11:09.810159 kubelet[2223]: I0515 10:11:09.810045 2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xprq\" (UniqueName: \"kubernetes.io/projected/12e389b9-6e26-4c9e-8f17-589ec81bbd99-kube-api-access-4xprq\") pod \"calico-kube-controllers-6777c65db9-lhgd2\" (UID: \"12e389b9-6e26-4c9e-8f17-589ec81bbd99\") " pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" May 15 10:11:09.992527 kubelet[2223]: E0515 10:11:09.992419 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:09.994410 env[1327]: time="2025-05-15T10:11:09.994068624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdvkn,Uid:ec6491f9-2d72-4fff-91b5-379e16328d47,Namespace:kube-system,Attempt:0,}" May 15 10:11:09.998732 env[1327]: time="2025-05-15T10:11:09.998683369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777c65db9-lhgd2,Uid:12e389b9-6e26-4c9e-8f17-589ec81bbd99,Namespace:calico-system,Attempt:0,}" May 15 10:11:10.001643 env[1327]: 
time="2025-05-15T10:11:10.001467192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-qmms6,Uid:304e1844-0899-4d12-8f60-1c590160ff7b,Namespace:calico-apiserver,Attempt:0,}" May 15 10:11:10.002983 env[1327]: time="2025-05-15T10:11:10.002768967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-flc89,Uid:5adeff55-662e-40ab-bc79-10150d8d28e3,Namespace:calico-apiserver,Attempt:0,}" May 15 10:11:10.010400 kubelet[2223]: E0515 10:11:10.007598 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:10.010486 env[1327]: time="2025-05-15T10:11:10.010271542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ddwbx,Uid:a56d6f69-05c6-49eb-910c-8dc8aa5ddf37,Namespace:kube-system,Attempt:0,}" May 15 10:11:10.261812 env[1327]: time="2025-05-15T10:11:10.261729655Z" level=error msg="Failed to destroy network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.262169 env[1327]: time="2025-05-15T10:11:10.262130967Z" level=error msg="encountered an error cleaning up failed sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.262237 env[1327]: time="2025-05-15T10:11:10.262179646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-flc89,Uid:5adeff55-662e-40ab-bc79-10150d8d28e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.263606 kubelet[2223]: E0515 10:11:10.263223 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.263606 kubelet[2223]: E0515 10:11:10.263301 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" May 15 10:11:10.263606 kubelet[2223]: E0515 10:11:10.263322 2223 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" May 15 10:11:10.263781 env[1327]: time="2025-05-15T10:11:10.263433222Z" level=error msg="Failed to destroy network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.263818 kubelet[2223]: E0515 10:11:10.263372 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bff68f469-flc89_calico-apiserver(5adeff55-662e-40ab-bc79-10150d8d28e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bff68f469-flc89_calico-apiserver(5adeff55-662e-40ab-bc79-10150d8d28e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" podUID="5adeff55-662e-40ab-bc79-10150d8d28e3" May 15 10:11:10.264134 env[1327]: time="2025-05-15T10:11:10.264099009Z" level=error msg="encountered an error cleaning up failed sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.264191 env[1327]: time="2025-05-15T10:11:10.264147688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdvkn,Uid:ec6491f9-2d72-4fff-91b5-379e16328d47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.264340 kubelet[2223]: E0515 10:11:10.264308 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.264409 kubelet[2223]: E0515 10:11:10.264349 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bdvkn" May 15 10:11:10.264409 kubelet[2223]: E0515 10:11:10.264365 2223 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bdvkn" May 15 10:11:10.264409 kubelet[2223]: E0515 10:11:10.264393 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bdvkn_kube-system(ec6491f9-2d72-4fff-91b5-379e16328d47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bdvkn_kube-system(ec6491f9-2d72-4fff-91b5-379e16328d47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bdvkn" podUID="ec6491f9-2d72-4fff-91b5-379e16328d47" May 15 10:11:10.270102 env[1327]: time="2025-05-15T10:11:10.270055374Z" level=error msg="Failed to destroy network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.270528 env[1327]: time="2025-05-15T10:11:10.270495046Z" level=error msg="encountered an error cleaning up failed sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.270645 env[1327]: time="2025-05-15T10:11:10.270617203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777c65db9-lhgd2,Uid:12e389b9-6e26-4c9e-8f17-589ec81bbd99,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.270898 kubelet[2223]: E0515 10:11:10.270867 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.270964 kubelet[2223]: E0515 10:11:10.270914 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" May 15 10:11:10.270964 kubelet[2223]: E0515 10:11:10.270935 2223 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" May 15 10:11:10.271020 kubelet[2223]: E0515 10:11:10.270968 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6777c65db9-lhgd2_calico-system(12e389b9-6e26-4c9e-8f17-589ec81bbd99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6777c65db9-lhgd2_calico-system(12e389b9-6e26-4c9e-8f17-589ec81bbd99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" podUID="12e389b9-6e26-4c9e-8f17-589ec81bbd99" May 15 10:11:10.273791 env[1327]: time="2025-05-15T10:11:10.273755543Z" level=error msg="Failed to destroy network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274177 env[1327]: time="2025-05-15T10:11:10.274145095Z" level=error msg="encountered an error cleaning up failed sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274323 env[1327]: time="2025-05-15T10:11:10.274294772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ddwbx,Uid:a56d6f69-05c6-49eb-910c-8dc8aa5ddf37,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274505 env[1327]: time="2025-05-15T10:11:10.274462769Z" level=error msg="Failed to destroy network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274567 kubelet[2223]: E0515 10:11:10.274530 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274606 kubelet[2223]: E0515 10:11:10.274580 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ddwbx" May 15 10:11:10.274606 kubelet[2223]: E0515 10:11:10.274600 2223 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ddwbx" May 15 10:11:10.274656 kubelet[2223]: E0515 10:11:10.274632 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ddwbx_kube-system(a56d6f69-05c6-49eb-910c-8dc8aa5ddf37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ddwbx_kube-system(a56d6f69-05c6-49eb-910c-8dc8aa5ddf37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ddwbx" podUID="a56d6f69-05c6-49eb-910c-8dc8aa5ddf37" May 15 10:11:10.274815 env[1327]: time="2025-05-15T10:11:10.274769283Z" level=error msg="encountered an error cleaning up failed sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274864 env[1327]: time="2025-05-15T10:11:10.274814682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-qmms6,Uid:304e1844-0899-4d12-8f60-1c590160ff7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.274994 kubelet[2223]: E0515 10:11:10.274959 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.275037 kubelet[2223]: E0515 10:11:10.274996 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" May 15 10:11:10.275037 kubelet[2223]: E0515 10:11:10.275010 2223 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" May 15 10:11:10.275088 kubelet[2223]: E0515 10:11:10.275036 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bff68f469-qmms6_calico-apiserver(304e1844-0899-4d12-8f60-1c590160ff7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bff68f469-qmms6_calico-apiserver(304e1844-0899-4d12-8f60-1c590160ff7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" podUID="304e1844-0899-4d12-8f60-1c590160ff7b" May 15 10:11:10.451649 kubelet[2223]: E0515 10:11:10.451602 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:10.453402 env[1327]: time="2025-05-15T10:11:10.453366760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 10:11:10.454548 kubelet[2223]: I0515 10:11:10.454397 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:10.455554 env[1327]: time="2025-05-15T10:11:10.455525239Z" level=info msg="StopPodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\"" May 15 10:11:10.460269 kubelet[2223]: I0515 10:11:10.456293 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:10.460269 kubelet[2223]: I0515 10:11:10.458309 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:10.460407 env[1327]: time="2025-05-15T10:11:10.456671377Z" level=info msg="StopPodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\"" May 15 10:11:10.460407 env[1327]: time="2025-05-15T10:11:10.458704737Z" level=info msg="StopPodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\"" May 15 10:11:10.461802 kubelet[2223]: I0515 10:11:10.461780 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:10.462436 env[1327]: time="2025-05-15T10:11:10.462407386Z" level=info msg="StopPodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\"" May 15 10:11:10.464771 kubelet[2223]: I0515 10:11:10.464439 2223 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:10.464978 env[1327]: time="2025-05-15T10:11:10.464949577Z" level=info msg="StopPodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\"" May 15 10:11:10.500728 env[1327]: time="2025-05-15T10:11:10.500661848Z" level=error msg="StopPodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" failed" error="failed to destroy network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.501170 kubelet[2223]: E0515 10:11:10.501005 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:10.501170 kubelet[2223]: E0515 10:11:10.501061 2223 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291"} May 15 10:11:10.501170 kubelet[2223]: E0515 10:11:10.501120 2223 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5adeff55-662e-40ab-bc79-10150d8d28e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 10:11:10.501170 kubelet[2223]: E0515 10:11:10.501141 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5adeff55-662e-40ab-bc79-10150d8d28e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" podUID="5adeff55-662e-40ab-bc79-10150d8d28e3" May 15 10:11:10.503882 env[1327]: time="2025-05-15T10:11:10.503836227Z" level=error msg="StopPodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" failed" error="failed to destroy network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.504030 kubelet[2223]: E0515 10:11:10.503996 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:10.504084 kubelet[2223]: E0515 10:11:10.504037 2223 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2"} May 15 10:11:10.504084 kubelet[2223]: E0515 10:11:10.504062 2223 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12e389b9-6e26-4c9e-8f17-589ec81bbd99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 10:11:10.504171 kubelet[2223]: E0515 10:11:10.504085 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12e389b9-6e26-4c9e-8f17-589ec81bbd99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" podUID="12e389b9-6e26-4c9e-8f17-589ec81bbd99" May 15 10:11:10.508075 env[1327]: time="2025-05-15T10:11:10.508025627Z" level=error msg="StopPodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" failed" error="failed to destroy network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.508346 kubelet[2223]: E0515 10:11:10.508240 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:10.508346 kubelet[2223]: E0515 10:11:10.508272 2223 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b"} May 15 10:11:10.508346 kubelet[2223]: E0515 10:11:10.508295 2223 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"304e1844-0899-4d12-8f60-1c590160ff7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 10:11:10.508346 kubelet[2223]: E0515 10:11:10.508321 2223 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"304e1844-0899-4d12-8f60-1c590160ff7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" podUID="304e1844-0899-4d12-8f60-1c590160ff7b" May 15 10:11:10.510870 env[1327]: time="2025-05-15T10:11:10.510827973Z" level=error msg="StopPodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" failed" error="failed to destroy network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.511017 kubelet[2223]: E0515 10:11:10.510980 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:10.511017 kubelet[2223]: E0515 10:11:10.511014 2223 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d"} May 15 10:11:10.511126 kubelet[2223]: E0515 10:11:10.511040 2223 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 10:11:10.511126 kubelet[2223]: E0515 10:11:10.511058 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ddwbx" podUID="a56d6f69-05c6-49eb-910c-8dc8aa5ddf37" May 15 10:11:10.523660 env[1327]: time="2025-05-15T10:11:10.522136795Z" level=error msg="StopPodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" failed" error="failed to destroy network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:10.523729 kubelet[2223]: E0515 10:11:10.522313 2223 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:10.523729 kubelet[2223]: E0515 10:11:10.522350 2223 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41"} May 15 10:11:10.523729 kubelet[2223]: E0515 10:11:10.522376 2223 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec6491f9-2d72-4fff-91b5-379e16328d47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 10:11:10.523729 kubelet[2223]: E0515 10:11:10.522393 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec6491f9-2d72-4fff-91b5-379e16328d47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bdvkn" podUID="ec6491f9-2d72-4fff-91b5-379e16328d47" May 15 10:11:10.952563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b-shm.mount: Deactivated successfully. May 15 10:11:10.952701 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2-shm.mount: Deactivated successfully. May 15 10:11:10.952794 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41-shm.mount: Deactivated successfully. 
May 15 10:11:11.372668 env[1327]: time="2025-05-15T10:11:11.372627207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lr4rp,Uid:f0cb8081-235c-41eb-97c5-f1fef3d019bf,Namespace:calico-system,Attempt:0,}" May 15 10:11:11.418362 env[1327]: time="2025-05-15T10:11:11.418312981Z" level=error msg="Failed to destroy network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:11.418805 env[1327]: time="2025-05-15T10:11:11.418773693Z" level=error msg="encountered an error cleaning up failed sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:11.418919 env[1327]: time="2025-05-15T10:11:11.418894531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lr4rp,Uid:f0cb8081-235c-41eb-97c5-f1fef3d019bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:11.420426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd-shm.mount: Deactivated successfully. May 15 10:11:11.420641 kubelet[2223]: E0515 10:11:11.420593 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:11.420698 kubelet[2223]: E0515 10:11:11.420655 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:11.420698 kubelet[2223]: E0515 10:11:11.420685 2223 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lr4rp" May 15 10:11:11.420758 kubelet[2223]: E0515 10:11:11.420723 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lr4rp_calico-system(f0cb8081-235c-41eb-97c5-f1fef3d019bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-lr4rp_calico-system(f0cb8081-235c-41eb-97c5-f1fef3d019bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:11.469273 kubelet[2223]: I0515 10:11:11.468179 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:11.469619 env[1327]: time="2025-05-15T10:11:11.469373458Z" level=info msg="StopPodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\"" May 15 10:11:11.495227 env[1327]: time="2025-05-15T10:11:11.495171552Z" level=error msg="StopPodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" failed" error="failed to destroy network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 10:11:11.495440 kubelet[2223]: E0515 10:11:11.495396 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:11.495494 kubelet[2223]: E0515 10:11:11.495448 2223 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd"} May 15 10:11:11.495494 kubelet[2223]: E0515 10:11:11.495481 2223 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 10:11:11.495574 kubelet[2223]: E0515 10:11:11.495501 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0cb8081-235c-41eb-97c5-f1fef3d019bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lr4rp" podUID="f0cb8081-235c-41eb-97c5-f1fef3d019bf" May 15 10:11:12.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.43:22-10.0.0.1:58326 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' May 15 10:11:12.016087 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:58326.service. May 15 10:11:12.019889 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 10:11:12.019962 kernel: audit: type=1130 audit(1747303872.014:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.43:22-10.0.0.1:58326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:12.058000 audit[3374]: USER_ACCT pid=3374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.060103 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 58326 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:12.064424 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:12.059000 audit[3374]: CRED_ACQ pid=3374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.068463 kernel: audit: type=1101 audit(1747303872.058:301): pid=3374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.068541 kernel: audit: type=1103 audit(1747303872.059:302): pid=3374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.071320 kernel: audit: type=1006 audit(1747303872.059:303): pid=3374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 15 10:11:12.072466 kernel: audit: type=1300 audit(1747303872.059:303): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebadb540 a2=3 a3=1 items=0 ppid=1 pid=3374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:12.059000 audit[3374]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebadb540 a2=3 a3=1 items=0 ppid=1 pid=3374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:12.071919 systemd-logind[1310]: New session 9 of user core. May 15 10:11:12.072275 systemd[1]: Started session-9.scope. 
May 15 10:11:12.059000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:12.077784 kernel: audit: type=1327 audit(1747303872.059:303): proctitle=737368643A20636F7265205B707269765D May 15 10:11:12.075000 audit[3374]: USER_START pid=3374 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.081870 kernel: audit: type=1105 audit(1747303872.075:304): pid=3374 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.076000 audit[3377]: CRED_ACQ pid=3377 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.085414 kernel: audit: type=1103 audit(1747303872.076:305): pid=3377 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.221431 sshd[3374]: pam_unix(sshd:session): session closed for user core May 15 10:11:12.221000 audit[3374]: USER_END pid=3374 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.224103 systemd-logind[1310]: Session 9 logged out. Waiting for processes to exit. May 15 10:11:12.224229 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:58326.service: Deactivated successfully. May 15 10:11:12.225002 systemd[1]: session-9.scope: Deactivated successfully. May 15 10:11:12.225468 systemd-logind[1310]: Removed session 9. May 15 10:11:12.221000 audit[3374]: CRED_DISP pid=3374 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.231446 kernel: audit: type=1106 audit(1747303872.221:306): pid=3374 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.231503 kernel: audit: type=1104 audit(1747303872.221:307): pid=3374 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:12.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.43:22-10.0.0.1:58326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:15.960823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510201606.mount: Deactivated successfully. 
May 15 10:11:16.212230 env[1327]: time="2025-05-15T10:11:16.212102523Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:16.213512 env[1327]: time="2025-05-15T10:11:16.213482515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:16.214855 env[1327]: time="2025-05-15T10:11:16.214828348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:16.216339 env[1327]: time="2025-05-15T10:11:16.216314060Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:16.216818 env[1327]: time="2025-05-15T10:11:16.216790457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 10:11:16.230662 env[1327]: time="2025-05-15T10:11:16.230604303Z" level=info msg="CreateContainer within sandbox \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 10:11:16.242630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007362373.mount: Deactivated successfully. May 15 10:11:16.246760 env[1327]: time="2025-05-15T10:11:16.246702336Z" level=info msg="CreateContainer within sandbox \"da2d86af33b59fbfa9dcd3564d36c92ed222ba53f75dd4a9683059611791db72\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"22f401d7f87f6dc7def10edbe4377ff4c82475ad8559ab64fcbe17ecd41d63e7\"" May 15 10:11:16.248746 env[1327]: time="2025-05-15T10:11:16.248699485Z" level=info msg="StartContainer for \"22f401d7f87f6dc7def10edbe4377ff4c82475ad8559ab64fcbe17ecd41d63e7\"" May 15 10:11:16.421491 env[1327]: time="2025-05-15T10:11:16.421436235Z" level=info msg="StartContainer for \"22f401d7f87f6dc7def10edbe4377ff4c82475ad8559ab64fcbe17ecd41d63e7\" returns successfully" May 15 10:11:16.483274 kubelet[2223]: E0515 10:11:16.483166 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:16.503753 kubelet[2223]: I0515 10:11:16.503692 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xnkk4" podStartSLOduration=1.493179214 podStartE2EDuration="15.503676272s" podCreationTimestamp="2025-05-15 10:11:01 +0000 UTC" firstStartedPulling="2025-05-15 10:11:02.207586352 +0000 UTC m=+22.933074734" lastFinishedPulling="2025-05-15 10:11:16.21808337 +0000 UTC m=+36.943571792" observedRunningTime="2025-05-15 10:11:16.503381873 +0000 UTC m=+37.228870335" watchObservedRunningTime="2025-05-15 10:11:16.503676272 +0000 UTC m=+37.229164694" May 15 10:11:16.555996 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 10:11:16.556170 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 10:11:17.224377 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:47198.service. 
May 15 10:11:17.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.43:22-10.0.0.1:47198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.225414 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 10:11:17.225479 kernel: audit: type=1130 audit(1747303877.223:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.43:22-10.0.0.1:47198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.265000 audit[3480]: USER_ACCT pid=3480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.266552 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 47198 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:17.268156 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:17.266000 audit[3480]: CRED_ACQ pid=3480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.272258 systemd-logind[1310]: New session 10 of user core. May 15 10:11:17.272784 kernel: audit: type=1101 audit(1747303877.265:310): pid=3480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.272824 kernel: audit: type=1103 audit(1747303877.266:311): pid=3480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.273246 systemd[1]: Started session-10.scope. 
May 15 10:11:17.274872 kernel: audit: type=1006 audit(1747303877.266:312): pid=3480 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 15 10:11:17.274932 kernel: audit: type=1300 audit(1747303877.266:312): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeee9c970 a2=3 a3=1 items=0 ppid=1 pid=3480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.266000 audit[3480]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeee9c970 a2=3 a3=1 items=0 ppid=1 pid=3480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.278583 kernel: audit: type=1327 audit(1747303877.266:312): proctitle=737368643A20636F7265205B707269765D May 15 10:11:17.266000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:17.275000 audit[3480]: USER_START pid=3480 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.283581 kernel: audit: type=1105 audit(1747303877.275:313): pid=3480 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.283736 kernel: audit: type=1103 audit(1747303877.276:314): pid=3483 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.276000 audit[3483]: CRED_ACQ pid=3483 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.418662 sshd[3480]: pam_unix(sshd:session): session closed for user core May 15 10:11:17.419000 audit[3480]: USER_END pid=3480 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.421394 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:47210.service. May 15 10:11:17.423063 systemd-logind[1310]: Session 10 logged out. Waiting for processes to exit. May 15 10:11:17.423246 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:47198.service: Deactivated successfully. May 15 10:11:17.424119 systemd[1]: session-10.scope: Deactivated successfully. May 15 10:11:17.424585 systemd-logind[1310]: Removed session 10. 
May 15 10:11:17.419000 audit[3480]: CRED_DISP pid=3480 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.427767 kernel: audit: type=1106 audit(1747303877.419:315): pid=3480 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.427831 kernel: audit: type=1104 audit(1747303877.419:316): pid=3480 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.43:22-10.0.0.1:47210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.43:22-10.0.0.1:47198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.459000 audit[3493]: USER_ACCT pid=3493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.461319 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 47210 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:17.461000 audit[3493]: CRED_ACQ pid=3493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.461000 audit[3493]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff30dc6a0 a2=3 a3=1 items=0 ppid=1 pid=3493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.461000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:17.462731 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:17.465942 systemd-logind[1310]: New session 11 of user core. May 15 10:11:17.466844 systemd[1]: Started session-11.scope. 
May 15 10:11:17.468000 audit[3493]: USER_START pid=3493 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.469000 audit[3498]: CRED_ACQ pid=3498 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.480385 kubelet[2223]: E0515 10:11:17.480294 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:17.625668 sshd[3493]: pam_unix(sshd:session): session closed for user core May 15 10:11:17.626975 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:47224.service. May 15 10:11:17.626000 audit[3493]: USER_END pid=3493 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.626000 audit[3493]: CRED_DISP pid=3493 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.43:22-10.0.0.1:47224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.630861 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:47210.service: Deactivated successfully. May 15 10:11:17.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.43:22-10.0.0.1:47210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.632013 systemd-logind[1310]: Session 11 logged out. Waiting for processes to exit. May 15 10:11:17.632181 systemd[1]: session-11.scope: Deactivated successfully. May 15 10:11:17.633007 systemd-logind[1310]: Removed session 11. 
May 15 10:11:17.668000 audit[3530]: USER_ACCT pid=3530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.669932 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 47224 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:17.669000 audit[3530]: CRED_ACQ pid=3530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.669000 audit[3530]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1c39a20 a2=3 a3=1 items=0 ppid=1 pid=3530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.669000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:17.670985 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:17.674095 systemd-logind[1310]: New session 12 of user core. May 15 10:11:17.674892 systemd[1]: Started session-12.scope. May 15 10:11:17.676000 audit[3530]: USER_START pid=3530 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.678000 audit[3535]: CRED_ACQ pid=3535 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.785373 sshd[3530]: pam_unix(sshd:session): session closed for user core May 15 10:11:17.784000 audit[3530]: USER_END pid=3530 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.785000 audit[3530]: CRED_DISP pid=3530 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:17.788199 systemd-logind[1310]: Session 12 logged out. Waiting for processes to exit. May 15 10:11:17.788526 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:47224.service: Deactivated successfully. May 15 10:11:17.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.43:22-10.0.0.1:47224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:17.789576 systemd[1]: session-12.scope: Deactivated successfully. May 15 10:11:17.791635 systemd-logind[1310]: Removed session 12. 
May 15 10:11:17.843000 audit[3589]: AVC avc: denied { write } for pid=3589 comm="tee" name="fd" dev="proc" ino=20000 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.843000 audit[3589]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd8934a2b a2=241 a3=1b6 items=1 ppid=3555 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.843000 audit: CWD cwd="/etc/service/enabled/bird/log" May 15 10:11:17.843000 audit: PATH item=0 name="/dev/fd/63" inode=19997 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:17.848000 audit[3606]: AVC avc: denied { write } for pid=3606 comm="tee" name="fd" dev="proc" ino=20513 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.848000 audit[3606]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffef895a2a a2=241 a3=1b6 items=1 ppid=3557 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.848000 audit: CWD cwd="/etc/service/enabled/confd/log" May 15 10:11:17.848000 audit: PATH item=0 name="/dev/fd/63" inode=19189 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.848000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:17.849000 audit[3608]: AVC avc: denied { write } for pid=3608 comm="tee" name="fd" dev="proc" ino=18200 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.849000 audit[3608]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd0829a2c a2=241 a3=1b6 items=1 ppid=3558 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.849000 audit: CWD cwd="/etc/service/enabled/cni/log" May 15 10:11:17.849000 audit: PATH item=0 name="/dev/fd/63" inode=19192 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:17.851000 audit[3612]: AVC avc: denied { write } for pid=3612 comm="tee" name="fd" dev="proc" ino=18204 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.851000 audit[3612]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd7974a2a a2=241 a3=1b6 items=1 ppid=3554 pid=3612 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.851000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 15 10:11:17.851000 audit: PATH item=0 name="/dev/fd/63" inode=20004 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:17.853000 audit[3620]: AVC avc: denied { write } for pid=3620 comm="tee" name="fd" dev="proc" ino=18208 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.853000 audit[3620]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffec0faa2a a2=241 a3=1b6 items=1 ppid=3562 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.853000 audit: CWD cwd="/etc/service/enabled/felix/log" May 15 10:11:17.853000 audit: PATH item=0 name="/dev/fd/63" inode=20517 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.853000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:17.883000 audit[3631]: AVC avc: denied { write } for pid=3631 comm="tee" name="fd" dev="proc" ino=20521 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.883000 audit[3631]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffce7d0a1a a2=241 a3=1b6 items=1 ppid=3566 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.883000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 15 10:11:17.883000 audit: PATH item=0 name="/dev/fd/63" inode=20011 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:17.892000 audit[3636]: AVC avc: denied { write } for pid=3636 comm="tee" name="fd" dev="proc" ino=18212 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 10:11:17.892000 audit[3636]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe253ba1b a2=241 a3=1b6 items=1 ppid=3565 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:17.892000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 15 10:11:17.892000 audit: PATH item=0 name="/dev/fd/63" inode=20518 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:11:17.892000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 10:11:18.481544 kubelet[2223]: E0515 10:11:18.481480 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:21.371677 env[1327]: time="2025-05-15T10:11:21.371631942Z" level=info msg="StopPodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\"" May 15 10:11:21.372662 env[1327]: time="2025-05-15T10:11:21.371995100Z" level=info msg="StopPodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\"" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.505 [INFO][3774] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.507 [INFO][3774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" iface="eth0" netns="/var/run/netns/cni-814fca0a-2ebe-b71c-62ad-ce5b17aa9c1b" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.507 [INFO][3774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" iface="eth0" netns="/var/run/netns/cni-814fca0a-2ebe-b71c-62ad-ce5b17aa9c1b" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.509 [INFO][3774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" iface="eth0" netns="/var/run/netns/cni-814fca0a-2ebe-b71c-62ad-ce5b17aa9c1b" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.509 [INFO][3774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.509 [INFO][3774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.606 [INFO][3790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.606 [INFO][3790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.606 [INFO][3790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.626 [WARNING][3790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.626 [INFO][3790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.627 [INFO][3790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:21.636324 env[1327]: 2025-05-15 10:11:21.629 [INFO][3774] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:21.636324 env[1327]: time="2025-05-15T10:11:21.631481003Z" level=info msg="TearDown network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" successfully" May 15 10:11:21.636324 env[1327]: time="2025-05-15T10:11:21.631513083Z" level=info msg="StopPodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" returns successfully" May 15 10:11:21.636324 env[1327]: time="2025-05-15T10:11:21.634516109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-flc89,Uid:5adeff55-662e-40ab-bc79-10150d8d28e3,Namespace:calico-apiserver,Attempt:1,}" May 15 10:11:21.633501 systemd[1]: run-netns-cni\x2d814fca0a\x2d2ebe\x2db71c\x2d62ad\x2dce5b17aa9c1b.mount: Deactivated successfully. May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.509 [INFO][3773] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.509 [INFO][3773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" iface="eth0" netns="/var/run/netns/cni-f7eb006e-150c-3107-af16-80e4792c07a8" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.509 [INFO][3773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" iface="eth0" netns="/var/run/netns/cni-f7eb006e-150c-3107-af16-80e4792c07a8" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.509 [INFO][3773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" iface="eth0" netns="/var/run/netns/cni-f7eb006e-150c-3107-af16-80e4792c07a8" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.509 [INFO][3773] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.509 [INFO][3773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.606 [INFO][3792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.606 [INFO][3792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.627 [INFO][3792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.650 [WARNING][3792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.650 [INFO][3792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.651 [INFO][3792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:21.657627 env[1327]: 2025-05-15 10:11:21.654 [INFO][3773] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:21.659776 systemd[1]: run-netns-cni\x2df7eb006e\x2d150c\x2d3107\x2daf16\x2d80e4792c07a8.mount: Deactivated successfully. 
May 15 10:11:21.660972 env[1327]: time="2025-05-15T10:11:21.660936985Z" level=info msg="TearDown network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" successfully" May 15 10:11:21.661035 env[1327]: time="2025-05-15T10:11:21.660971905Z" level=info msg="StopPodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" returns successfully" May 15 10:11:21.661679 env[1327]: time="2025-05-15T10:11:21.661651702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777c65db9-lhgd2,Uid:12e389b9-6e26-4c9e-8f17-589ec81bbd99,Namespace:calico-system,Attempt:1,}" May 15 10:11:21.782543 systemd-networkd[1097]: cali7ecf8aecf70: Link UP May 15 10:11:21.785024 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 10:11:21.785117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7ecf8aecf70: link becomes ready May 15 10:11:21.785257 systemd-networkd[1097]: cali7ecf8aecf70: Gained carrier May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.675 [INFO][3805] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.691 [INFO][3805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0 calico-apiserver-6bff68f469- calico-apiserver 5adeff55-662e-40ab-bc79-10150d8d28e3 867 0 2025-05-15 10:11:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bff68f469 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bff68f469-flc89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7ecf8aecf70 [] []}} ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.691 [INFO][3805] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.724 [INFO][3834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" HandleID="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.737 [INFO][3834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" HandleID="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d1d10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bff68f469-flc89", "timestamp":"2025-05-15 10:11:21.724157808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.737 [INFO][3834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.737 [INFO][3834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.737 [INFO][3834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.739 [INFO][3834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.751 [INFO][3834] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.755 [INFO][3834] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.757 [INFO][3834] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.759 [INFO][3834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.759 [INFO][3834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.761 [INFO][3834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.765 [INFO][3834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.770 [INFO][3834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.770 [INFO][3834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" host="localhost" May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.770 [INFO][3834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 10:11:21.806343 env[1327]: 2025-05-15 10:11:21.770 [INFO][3834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" HandleID="k8s-pod-network.039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.807015 env[1327]: 2025-05-15 10:11:21.773 [INFO][3805] cni-plugin/k8s.go 386: Populated endpoint ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"5adeff55-662e-40ab-bc79-10150d8d28e3", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bff68f469-flc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ecf8aecf70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:21.807015 env[1327]: 2025-05-15 10:11:21.773 [INFO][3805] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.807015 env[1327]: 2025-05-15 10:11:21.773 [INFO][3805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ecf8aecf70 ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.807015 env[1327]: 2025-05-15 10:11:21.784 [INFO][3805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.807015 env[1327]: 2025-05-15 10:11:21.784 [INFO][3805] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" 
Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"5adeff55-662e-40ab-bc79-10150d8d28e3", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d", Pod:"calico-apiserver-6bff68f469-flc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ecf8aecf70", MAC:"3e:0a:17:8f:6f:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:21.807015 env[1327]: 2025-05-15 10:11:21.798 [INFO][3805] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-flc89" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:21.809999 systemd-networkd[1097]: cali72082c87d9f: Link UP May 15 10:11:21.811273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali72082c87d9f: link becomes ready May 15 10:11:21.811438 systemd-networkd[1097]: cali72082c87d9f: Gained carrier May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.697 [INFO][3818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.712 [INFO][3818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0 calico-kube-controllers-6777c65db9- calico-system 12e389b9-6e26-4c9e-8f17-589ec81bbd99 868 0 2025-05-15 10:11:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6777c65db9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6777c65db9-lhgd2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali72082c87d9f [] []}} ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.712 [INFO][3818] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.751 [INFO][3843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" HandleID="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.765 [INFO][3843] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" HandleID="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6777c65db9-lhgd2", "timestamp":"2025-05-15 10:11:21.751321001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.765 [INFO][3843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.770 [INFO][3843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.770 [INFO][3843] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.774 [INFO][3843] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.778 [INFO][3843] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.785 [INFO][3843] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.787 [INFO][3843] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.789 [INFO][3843] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.789 [INFO][3843] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.791 [INFO][3843] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.796 [INFO][3843] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.803 [INFO][3843] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.804 [INFO][3843] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" host="localhost" May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.804 [INFO][3843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:21.827443 env[1327]: 2025-05-15 10:11:21.804 [INFO][3843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" HandleID="k8s-pod-network.e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.828021 env[1327]: 2025-05-15 10:11:21.806 [INFO][3818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0", GenerateName:"calico-kube-controllers-6777c65db9-", Namespace:"calico-system", SelfLink:"", UID:"12e389b9-6e26-4c9e-8f17-589ec81bbd99", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777c65db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6777c65db9-lhgd2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72082c87d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:21.828021 env[1327]: 2025-05-15 10:11:21.806 [INFO][3818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.828021 env[1327]: 2025-05-15 10:11:21.806 [INFO][3818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72082c87d9f ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.828021 
env[1327]: 2025-05-15 10:11:21.811 [INFO][3818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.828021 env[1327]: 2025-05-15 10:11:21.812 [INFO][3818] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0", GenerateName:"calico-kube-controllers-6777c65db9-", Namespace:"calico-system", SelfLink:"", UID:"12e389b9-6e26-4c9e-8f17-589ec81bbd99", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777c65db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b", Pod:"calico-kube-controllers-6777c65db9-lhgd2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72082c87d9f", MAC:"06:b7:56:ad:4c:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:21.828021 env[1327]: 2025-05-15 10:11:21.825 [INFO][3818] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b" Namespace="calico-system" Pod="calico-kube-controllers-6777c65db9-lhgd2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:21.828677 env[1327]: time="2025-05-15T10:11:21.828619439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:21.828753 env[1327]: time="2025-05-15T10:11:21.828687638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:21.828753 env[1327]: time="2025-05-15T10:11:21.828712438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:21.829015 env[1327]: time="2025-05-15T10:11:21.828865237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d pid=3874 runtime=io.containerd.runc.v2 May 15 10:11:21.842165 env[1327]: time="2025-05-15T10:11:21.840637982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:21.842165 env[1327]: time="2025-05-15T10:11:21.840737542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:21.842165 env[1327]: time="2025-05-15T10:11:21.840763022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:21.842165 env[1327]: time="2025-05-15T10:11:21.841002140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b pid=3912 runtime=io.containerd.runc.v2 May 15 10:11:21.883272 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:11:21.905265 env[1327]: time="2025-05-15T10:11:21.903644967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-flc89,Uid:5adeff55-662e-40ab-bc79-10150d8d28e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d\"" May 15 10:11:21.906469 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:11:21.906997 env[1327]: time="2025-05-15T10:11:21.906947111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 10:11:21.929124 env[1327]: time="2025-05-15T10:11:21.929078807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777c65db9-lhgd2,Uid:12e389b9-6e26-4c9e-8f17-589ec81bbd99,Namespace:calico-system,Attempt:1,} returns sandbox id \"e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b\"" May 15 10:11:22.371764 env[1327]: time="2025-05-15T10:11:22.371712498Z" level=info msg="StopPodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\"" May 15 10:11:22.372117 env[1327]: time="2025-05-15T10:11:22.371726978Z" level=info msg="StopPodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\"" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.422 [INFO][4017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.422 [INFO][4017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" iface="eth0" netns="/var/run/netns/cni-537e58e1-b5fb-c6ef-3347-ecc0f3bb6d3a" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.422 [INFO][4017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" iface="eth0" netns="/var/run/netns/cni-537e58e1-b5fb-c6ef-3347-ecc0f3bb6d3a" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.423 [INFO][4017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" iface="eth0" netns="/var/run/netns/cni-537e58e1-b5fb-c6ef-3347-ecc0f3bb6d3a" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.423 [INFO][4017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.423 [INFO][4017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.444 [INFO][4033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.444 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.444 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.452 [WARNING][4033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.452 [INFO][4033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.454 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:22.457682 env[1327]: 2025-05-15 10:11:22.455 [INFO][4017] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:22.458098 env[1327]: time="2025-05-15T10:11:22.457815305Z" level=info msg="TearDown network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" successfully" May 15 10:11:22.458098 env[1327]: time="2025-05-15T10:11:22.457853665Z" level=info msg="StopPodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" returns successfully" May 15 10:11:22.458557 env[1327]: time="2025-05-15T10:11:22.458516582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-qmms6,Uid:304e1844-0899-4d12-8f60-1c590160ff7b,Namespace:calico-apiserver,Attempt:1,}" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.426 [INFO][4016] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.426 [INFO][4016] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" iface="eth0" netns="/var/run/netns/cni-962efbe7-183c-4229-af3d-557f1e248930" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.426 [INFO][4016] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" iface="eth0" netns="/var/run/netns/cni-962efbe7-183c-4229-af3d-557f1e248930" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.426 [INFO][4016] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" iface="eth0" netns="/var/run/netns/cni-962efbe7-183c-4229-af3d-557f1e248930" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.426 [INFO][4016] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.426 [INFO][4016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.448 [INFO][4039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.448 [INFO][4039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.454 [INFO][4039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.465 [WARNING][4039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.465 [INFO][4039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.466 [INFO][4039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:22.471425 env[1327]: 2025-05-15 10:11:22.469 [INFO][4016] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:22.471809 env[1327]: time="2025-05-15T10:11:22.471526483Z" level=info msg="TearDown network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" successfully" May 15 10:11:22.471809 env[1327]: time="2025-05-15T10:11:22.471552163Z" level=info msg="StopPodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" returns successfully" May 15 10:11:22.473443 env[1327]: time="2025-05-15T10:11:22.473317955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lr4rp,Uid:f0cb8081-235c-41eb-97c5-f1fef3d019bf,Namespace:calico-system,Attempt:1,}" May 15 10:11:22.593439 systemd-networkd[1097]: cali9f8738b3fa3: Link UP May 15 10:11:22.594505 systemd-networkd[1097]: cali9f8738b3fa3: Gained carrier May 15 10:11:22.595358 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9f8738b3fa3: link becomes ready May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.489 [INFO][4050] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.504 [INFO][4050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0 calico-apiserver-6bff68f469- calico-apiserver 304e1844-0899-4d12-8f60-1c590160ff7b 882 0 2025-05-15 10:11:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bff68f469 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bff68f469-qmms6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f8738b3fa3 [] []}} ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.504 [INFO][4050] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.539 [INFO][4081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" 
HandleID="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.551 [INFO][4081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" HandleID="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f5ef0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bff68f469-qmms6", "timestamp":"2025-05-15 10:11:22.539394533 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.552 [INFO][4081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.552 [INFO][4081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.552 [INFO][4081] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.553 [INFO][4081] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.563 [INFO][4081] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.568 [INFO][4081] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.571 [INFO][4081] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.574 [INFO][4081] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.574 [INFO][4081] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.576 [INFO][4081] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21 May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.581 [INFO][4081] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.588 [INFO][4081] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.588 [INFO][4081] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" host="localhost" May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.588 [INFO][4081] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:22.605346 env[1327]: 2025-05-15 10:11:22.588 [INFO][4081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" HandleID="k8s-pod-network.f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.605947 env[1327]: 2025-05-15 10:11:22.590 [INFO][4050] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"304e1844-0899-4d12-8f60-1c590160ff7b", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bff68f469-qmms6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f8738b3fa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:22.605947 env[1327]: 2025-05-15 10:11:22.591 [INFO][4050] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.605947 env[1327]: 2025-05-15 10:11:22.591 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f8738b3fa3 ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.605947 env[1327]: 2025-05-15 10:11:22.594 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.605947 env[1327]: 2025-05-15 10:11:22.594 [INFO][4050] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"304e1844-0899-4d12-8f60-1c590160ff7b", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21", Pod:"calico-apiserver-6bff68f469-qmms6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f8738b3fa3", MAC:"86:0a:de:61:ea:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:22.605947 env[1327]: 2025-05-15 10:11:22.603 [INFO][4050] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21" Namespace="calico-apiserver" Pod="calico-apiserver-6bff68f469-qmms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:22.619281 env[1327]: time="2025-05-15T10:11:22.619206729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:22.619281 env[1327]: time="2025-05-15T10:11:22.619258889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:22.619431 env[1327]: time="2025-05-15T10:11:22.619268609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:22.619624 env[1327]: time="2025-05-15T10:11:22.619592607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21 pid=4117 runtime=io.containerd.runc.v2 May 15 10:11:22.630351 systemd-networkd[1097]: cali38bd2555809: Link UP May 15 10:11:22.641711 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali38bd2555809: link becomes ready May 15 10:11:22.631733 systemd-networkd[1097]: cali38bd2555809: Gained carrier May 15 10:11:22.637297 systemd[1]: run-netns-cni\x2d962efbe7\x2d183c\x2d4229\x2daf3d\x2d557f1e248930.mount: Deactivated successfully. May 15 10:11:22.637417 systemd[1]: run-netns-cni\x2d537e58e1\x2db5fb\x2dc6ef\x2d3347\x2decc0f3bb6d3a.mount: Deactivated successfully. 
May 15 10:11:22.666850 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.512 [INFO][4064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.526 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lr4rp-eth0 csi-node-driver- calico-system f0cb8081-235c-41eb-97c5-f1fef3d019bf 883 0 2025-05-15 10:11:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lr4rp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali38bd2555809 [] []}} ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.527 [INFO][4064] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.554 [INFO][4089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" HandleID="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.571 [INFO][4089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" HandleID="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lr4rp", "timestamp":"2025-05-15 10:11:22.554012706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.571 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.588 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.588 [INFO][4089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.590 [INFO][4089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.596 [INFO][4089] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.606 [INFO][4089] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.608 [INFO][4089] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.611 [INFO][4089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.611 [INFO][4089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.612 [INFO][4089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.618 [INFO][4089] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.624 [INFO][4089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.624 [INFO][4089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" host="localhost" May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.624 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 10:11:22.667977 env[1327]: 2025-05-15 10:11:22.624 [INFO][4089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" HandleID="k8s-pod-network.8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.668473 env[1327]: 2025-05-15 10:11:22.628 [INFO][4064] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lr4rp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0cb8081-235c-41eb-97c5-f1fef3d019bf", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lr4rp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38bd2555809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:22.668473 env[1327]: 2025-05-15 10:11:22.628 [INFO][4064] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.668473 env[1327]: 2025-05-15 10:11:22.628 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38bd2555809 ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.668473 env[1327]: 2025-05-15 10:11:22.644 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.668473 env[1327]: 2025-05-15 10:11:22.651 [INFO][4064] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lr4rp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0cb8081-235c-41eb-97c5-f1fef3d019bf", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab", Pod:"csi-node-driver-lr4rp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38bd2555809", MAC:"5a:be:19:75:04:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:22.668473 env[1327]: 2025-05-15 10:11:22.665 [INFO][4064] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab" Namespace="calico-system" Pod="csi-node-driver-lr4rp" WorkloadEndpoint="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:22.680920 env[1327]: time="2025-05-15T10:11:22.680837168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:22.680920 env[1327]: time="2025-05-15T10:11:22.680919287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:22.681089 env[1327]: time="2025-05-15T10:11:22.680944687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:22.681190 env[1327]: time="2025-05-15T10:11:22.681149366Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab pid=4164 runtime=io.containerd.runc.v2 May 15 10:11:22.694503 systemd[1]: run-containerd-runc-k8s.io-8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab-runc.Bwr9vo.mount: Deactivated successfully. 
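
The "starting signal loop" record above prints the runc v2 shim's task directory, which is composed of the runtime-v2 task root, the containerd namespace (k8s.io), and the sandbox ID. The sketch below merely reconstructs that logged path with the standard library; it is an assumption-level illustration of how the value is put together, not a use of containerd's API.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// buildShimStatePath rebuilds the path printed by the shim's "starting signal
// loop" record: runtime-v2 task root + containerd namespace + sandbox ID.
func buildShimStatePath(namespace, id string) string {
	return filepath.Join("/run/containerd/io.containerd.runtime.v2.task", namespace, id)
}

func main() {
	// Namespace and sandbox ID taken from the log record above.
	id := "8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab"
	fmt.Println(buildShimStatePath("k8s.io", id))
}
```
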
May 15 10:11:22.698138 env[1327]: time="2025-05-15T10:11:22.698101569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bff68f469-qmms6,Uid:304e1844-0899-4d12-8f60-1c590160ff7b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21\"" May 15 10:11:22.715772 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:11:22.724500 env[1327]: time="2025-05-15T10:11:22.724460169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lr4rp,Uid:f0cb8081-235c-41eb-97c5-f1fef3d019bf,Namespace:calico-system,Attempt:1,} returns sandbox id \"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab\"" May 15 10:11:22.789044 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:37670.service. May 15 10:11:22.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.43:22-10.0.0.1:37670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:22.793681 kernel: kauditd_printk_skb: 58 callbacks suppressed May 15 10:11:22.793778 kernel: audit: type=1130 audit(1747303882.788:343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.43:22-10.0.0.1:37670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:22.831000 audit[4203]: USER_ACCT pid=4203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.833352 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 37670 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:22.835067 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:22.833000 audit[4203]: CRED_ACQ pid=4203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.840019 kernel: audit: type=1101 audit(1747303882.831:344): pid=4203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.840069 kernel: audit: type=1103 audit(1747303882.833:345): pid=4203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.840099 kernel: audit: type=1006 audit(1747303882.833:346): pid=4203 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 May 15 10:11:22.839679 systemd-logind[1310]: New session 13 of user core. May 15 10:11:22.840155 systemd[1]: Started session-13.scope. 
May 15 10:11:22.833000 audit[4203]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8881780 a2=3 a3=1 items=0 ppid=1 pid=4203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:22.845284 kernel: audit: type=1300 audit(1747303882.833:346): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8881780 a2=3 a3=1 items=0 ppid=1 pid=4203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:22.833000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:22.846553 kernel: audit: type=1327 audit(1747303882.833:346): proctitle=737368643A20636F7265205B707269765D May 15 10:11:22.843000 audit[4203]: USER_START pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.850650 kernel: audit: type=1105 audit(1747303882.843:347): pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.850701 kernel: audit: type=1103 audit(1747303882.845:348): pid=4206 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.845000 audit[4206]: CRED_ACQ pid=4206 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.988030 sshd[4203]: pam_unix(sshd:session): session closed for user core May 15 10:11:22.987000 audit[4203]: USER_END pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.990854 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:37670.service: Deactivated successfully. May 15 10:11:22.991937 systemd[1]: session-13.scope: Deactivated successfully. May 15 10:11:22.991946 systemd-logind[1310]: Session 13 logged out. Waiting for processes to exit. May 15 10:11:22.992963 systemd-logind[1310]: Removed session 13. 
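
The audit PROCTITLE records in this stretch carry the process command line hex-encoded, with NUL bytes separating arguments; the sshd value 737368643A20636F7265205B707269765D above decodes to "sshd: core [priv]", and the longer iptables-restore values later in the log decode the same way. A minimal Go decoder for such a value, using only encoding/hex and strings, is sketched below.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex payload back into the argv it
// encodes; arguments are separated by NUL bytes in the raw value.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(string(raw), "\x00"), nil
}

func main() {
	// Value taken from the sshd PROCTITLE record above.
	args, err := decodeProctitle("737368643A20636F7265205B707269765D")
	if err != nil {
		panic(err)
	}
	fmt.Println(args) // [sshd: core [priv]]
}
```
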
May 15 10:11:22.987000 audit[4203]: CRED_DISP pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.995971 kernel: audit: type=1106 audit(1747303882.987:349): pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.996043 kernel: audit: type=1104 audit(1747303882.987:350): pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:22.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.43:22-10.0.0.1:37670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:23.109353 systemd-networkd[1097]: cali7ecf8aecf70: Gained IPv6LL May 15 10:11:23.371394 env[1327]: time="2025-05-15T10:11:23.371355582Z" level=info msg="StopPodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\"" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.413 [INFO][4258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.413 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" iface="eth0" netns="/var/run/netns/cni-92f1456c-b46e-615d-442f-8d3d43d80e54" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.413 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" iface="eth0" netns="/var/run/netns/cni-92f1456c-b46e-615d-442f-8d3d43d80e54" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.413 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" iface="eth0" netns="/var/run/netns/cni-92f1456c-b46e-615d-442f-8d3d43d80e54" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.413 [INFO][4258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.413 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.432 [INFO][4266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.432 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.432 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.440 [WARNING][4266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.440 [INFO][4266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.441 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:23.445293 env[1327]: 2025-05-15 10:11:23.443 [INFO][4258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:23.445916 env[1327]: time="2025-05-15T10:11:23.445406493Z" level=info msg="TearDown network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" successfully" May 15 10:11:23.445916 env[1327]: time="2025-05-15T10:11:23.445436333Z" level=info msg="StopPodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" returns successfully" May 15 10:11:23.445967 kubelet[2223]: E0515 10:11:23.445744 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:23.446466 env[1327]: time="2025-05-15T10:11:23.446438209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdvkn,Uid:ec6491f9-2d72-4fff-91b5-379e16328d47,Namespace:kube-system,Attempt:1,}" May 15 10:11:23.556368 systemd-networkd[1097]: cali516745998ee: Link UP May 15 10:11:23.558743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 10:11:23.558789 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali516745998ee: link becomes ready May 15 10:11:23.558592 systemd-networkd[1097]: cali516745998ee: Gained carrier May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.472 [INFO][4275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.485 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0 coredns-7db6d8ff4d- kube-system ec6491f9-2d72-4fff-91b5-379e16328d47 897 0 2025-05-15 10:10:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-bdvkn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali516745998ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.485 [INFO][4275] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.512 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" HandleID="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.523 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" HandleID="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-bdvkn", "timestamp":"2025-05-15 10:11:23.512831034 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.523 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.523 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.523 [INFO][4289] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.524 [INFO][4289] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.528 [INFO][4289] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.532 [INFO][4289] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.534 [INFO][4289] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.536 [INFO][4289] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.536 [INFO][4289] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.538 [INFO][4289] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825 May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.542 [INFO][4289] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.547 [INFO][4289] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" host="localhost" May 15 10:11:23.570231 env[1327]: 
2025-05-15 10:11:23.547 [INFO][4289] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" host="localhost" May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.547 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:23.570231 env[1327]: 2025-05-15 10:11:23.547 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" HandleID="k8s-pod-network.2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.570817 env[1327]: 2025-05-15 10:11:23.549 [INFO][4275] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec6491f9-2d72-4fff-91b5-379e16328d47", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-bdvkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali516745998ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:23.570817 env[1327]: 2025-05-15 10:11:23.549 [INFO][4275] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.570817 env[1327]: 2025-05-15 10:11:23.549 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali516745998ee ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.570817 env[1327]: 2025-05-15 10:11:23.558 [INFO][4275] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.570817 env[1327]: 2025-05-15 10:11:23.558 [INFO][4275] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec6491f9-2d72-4fff-91b5-379e16328d47", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825", Pod:"coredns-7db6d8ff4d-bdvkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali516745998ee", MAC:"2e:3a:eb:21:40:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:23.570817 env[1327]: 2025-05-15 10:11:23.568 [INFO][4275] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdvkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:23.580837 env[1327]: time="2025-05-15T10:11:23.580773772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:23.580837 env[1327]: time="2025-05-15T10:11:23.580809812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:23.580837 env[1327]: time="2025-05-15T10:11:23.580819892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:23.581226 env[1327]: time="2025-05-15T10:11:23.581179730Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825 pid=4315 runtime=io.containerd.runc.v2 May 15 10:11:23.622516 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:11:23.625246 systemd-networkd[1097]: cali72082c87d9f: Gained IPv6LL May 15 10:11:23.635650 systemd[1]: run-netns-cni\x2d92f1456c\x2db46e\x2d615d\x2d442f\x2d8d3d43d80e54.mount: Deactivated successfully. May 15 10:11:23.646090 env[1327]: time="2025-05-15T10:11:23.646047282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdvkn,Uid:ec6491f9-2d72-4fff-91b5-379e16328d47,Namespace:kube-system,Attempt:1,} returns sandbox id \"2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825\"" May 15 10:11:23.646822 kubelet[2223]: E0515 10:11:23.646785 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:23.650729 env[1327]: time="2025-05-15T10:11:23.650672182Z" level=info msg="CreateContainer within sandbox \"2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:11:23.662062 env[1327]: time="2025-05-15T10:11:23.662007572Z" level=info msg="CreateContainer within sandbox \"2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a3d1331f252f23369c519130f3b7496fb80165cff73af6d811b55726acf3920\"" May 15 10:11:23.662597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221832075.mount: Deactivated successfully. May 15 10:11:23.663695 env[1327]: time="2025-05-15T10:11:23.662652449Z" level=info msg="StartContainer for \"6a3d1331f252f23369c519130f3b7496fb80165cff73af6d811b55726acf3920\"" May 15 10:11:23.686311 systemd-networkd[1097]: cali38bd2555809: Gained IPv6LL May 15 10:11:23.707035 env[1327]: time="2025-05-15T10:11:23.706993812Z" level=info msg="StartContainer for \"6a3d1331f252f23369c519130f3b7496fb80165cff73af6d811b55726acf3920\" returns successfully" May 15 10:11:24.261345 systemd-networkd[1097]: cali9f8738b3fa3: Gained IPv6LL May 15 10:11:24.372392 env[1327]: time="2025-05-15T10:11:24.372329101Z" level=info msg="StopPodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\"" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.411 [INFO][4430] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.411 [INFO][4430] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" iface="eth0" netns="/var/run/netns/cni-ffd4b959-5a70-9d6d-37a2-e9c453345886" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.411 [INFO][4430] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" iface="eth0" netns="/var/run/netns/cni-ffd4b959-5a70-9d6d-37a2-e9c453345886" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.411 [INFO][4430] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" iface="eth0" netns="/var/run/netns/cni-ffd4b959-5a70-9d6d-37a2-e9c453345886" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.411 [INFO][4430] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.412 [INFO][4430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.430 [INFO][4438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.430 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.430 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.439 [WARNING][4438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.440 [INFO][4438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.441 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:24.444404 env[1327]: 2025-05-15 10:11:24.442 [INFO][4430] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:24.444848 env[1327]: time="2025-05-15T10:11:24.444576349Z" level=info msg="TearDown network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" successfully" May 15 10:11:24.444848 env[1327]: time="2025-05-15T10:11:24.444608829Z" level=info msg="StopPodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" returns successfully" May 15 10:11:24.445199 kubelet[2223]: E0515 10:11:24.444887 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:24.445485 env[1327]: time="2025-05-15T10:11:24.445461225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ddwbx,Uid:a56d6f69-05c6-49eb-910c-8dc8aa5ddf37,Namespace:kube-system,Attempt:1,}" May 15 10:11:24.505060 kubelet[2223]: E0515 10:11:24.505025 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:24.512084 kubelet[2223]: I0515 10:11:24.511966 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bdvkn" podStartSLOduration=31.511946298 podStartE2EDuration="31.511946298s" podCreationTimestamp="2025-05-15 10:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:11:24.51159962 +0000 UTC m=+45.237088042" watchObservedRunningTime="2025-05-15 10:11:24.511946298 +0000 UTC m=+45.237434720" May 15 10:11:24.559000 audit[4468]: NETFILTER_CFG table=filter:97 family=2 entries=18 op=nft_register_rule pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:24.559000 audit[4468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffd9cc73a0 a2=0 a3=1 items=0 ppid=2368 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:24.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:24.569000 audit[4468]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:24.569000 audit[4468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd9cc73a0 a2=0 a3=1 items=0 ppid=2368 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:24.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:24.580000 audit[4470]: NETFILTER_CFG table=filter:99 family=2 entries=15 op=nft_register_rule pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:24.580000 audit[4470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=fffff0fc1d90 a2=0 a3=1 items=0 ppid=2368 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:24.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:24.585000 audit[4470]: NETFILTER_CFG table=nat:100 family=2 entries=33 op=nft_register_chain pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:24.585000 audit[4470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=fffff0fc1d90 a2=0 a3=1 items=0 ppid=2368 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:24.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:24.614708 systemd-networkd[1097]: cali7d99ecbcaf9: Link UP May 15 10:11:24.628877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 10:11:24.628987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7d99ecbcaf9: link becomes ready May 15 10:11:24.628949 systemd-networkd[1097]: cali7d99ecbcaf9: Gained carrier May 15 10:11:24.635198 systemd[1]: run-netns-cni\x2dffd4b959\x2d5a70\x2d9d6d\x2d37a2\x2de9c453345886.mount: Deactivated successfully. May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.474 [INFO][4445] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.487 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0 coredns-7db6d8ff4d- kube-system a56d6f69-05c6-49eb-910c-8dc8aa5ddf37 908 0 2025-05-15 10:10:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-ddwbx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d99ecbcaf9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.487 [INFO][4445] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.547 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" HandleID="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.561 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" HandleID="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8450), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-ddwbx", "timestamp":"2025-05-15 10:11:24.547688184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.561 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.562 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.562 [INFO][4460] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.563 [INFO][4460] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.572 [INFO][4460] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.580 [INFO][4460] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.582 [INFO][4460] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.584 [INFO][4460] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.584 [INFO][4460] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.586 [INFO][4460] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.589 [INFO][4460] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.596 [INFO][4460] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.596 [INFO][4460] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" host="localhost" May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.596 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
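
Across the three CNI ADD operations in this stretch, the same host-affine block hands out consecutive addresses: 192.168.88.132 to csi-node-driver-lr4rp, .133 to coredns-7db6d8ff4d-bdvkn, and .134 to coredns-7db6d8ff4d-ddwbx. The toy allocator below only makes that ordering concrete; real Calico IPAM persists handles and block state in the datastore under the host-wide lock seen in the records above, which this sketch does not model.

```go
package main

import (
	"fmt"
	"net/netip"
)

// blockAllocator is a toy model of handing out addresses from a host-affine
// block in order. It only illustrates the ordering seen in the log; it is not
// Calico's IPAM, which records handles and allocations in its datastore.
type blockAllocator struct {
	block netip.Prefix
	next  netip.Addr
}

func newBlockAllocator(cidr, first string) *blockAllocator {
	return &blockAllocator{
		block: netip.MustParsePrefix(cidr),
		next:  netip.MustParseAddr(first),
	}
}

func (b *blockAllocator) assign() (netip.Addr, bool) {
	if !b.block.Contains(b.next) {
		return netip.Addr{}, false // block exhausted
	}
	a := b.next
	b.next = b.next.Next()
	return a, true
}

func main() {
	// 192.168.88.132 was the next free address when this stretch begins;
	// .132/.133/.134 then go to the three workloads in turn.
	alloc := newBlockAllocator("192.168.88.128/26", "192.168.88.132")
	for _, pod := range []string{"csi-node-driver-lr4rp", "coredns-7db6d8ff4d-bdvkn", "coredns-7db6d8ff4d-ddwbx"} {
		if ip, ok := alloc.assign(); ok {
			fmt.Printf("%s -> %s\n", pod, ip)
		}
	}
}
```
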
May 15 10:11:24.646200 env[1327]: 2025-05-15 10:11:24.596 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" HandleID="k8s-pod-network.9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.646884 env[1327]: 2025-05-15 10:11:24.603 [INFO][4445] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-ddwbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d99ecbcaf9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:24.646884 env[1327]: 2025-05-15 10:11:24.603 [INFO][4445] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.646884 env[1327]: 2025-05-15 10:11:24.603 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d99ecbcaf9 ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.646884 env[1327]: 2025-05-15 10:11:24.629 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.646884 env[1327]: 2025-05-15 10:11:24.631 [INFO][4445] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f", Pod:"coredns-7db6d8ff4d-ddwbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d99ecbcaf9", MAC:"1a:b7:a7:9d:b1:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:24.646884 env[1327]: 2025-05-15 10:11:24.644 [INFO][4445] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ddwbx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:24.704797 env[1327]: time="2025-05-15T10:11:24.704720185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:11:24.704946 env[1327]: time="2025-05-15T10:11:24.704771985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:11:24.704946 env[1327]: time="2025-05-15T10:11:24.704782185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:11:24.705049 env[1327]: time="2025-05-15T10:11:24.704982544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f pid=4496 runtime=io.containerd.runc.v2 May 15 10:11:24.790854 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:11:24.808870 env[1327]: time="2025-05-15T10:11:24.808825495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ddwbx,Uid:a56d6f69-05c6-49eb-910c-8dc8aa5ddf37,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f\"" May 15 10:11:24.809660 kubelet[2223]: E0515 10:11:24.809639 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:24.811858 env[1327]: time="2025-05-15T10:11:24.811703643Z" level=info msg="CreateContainer within sandbox \"9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:11:24.832966 env[1327]: time="2025-05-15T10:11:24.832921311Z" level=info msg="CreateContainer within sandbox \"9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f4a1e49fbdf113b2bbfcd3ba97eff11ff30b019a4662a72defe2eaf6995fdb1\"" May 15 10:11:24.834148 env[1327]: time="2025-05-15T10:11:24.833201350Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:24.834451 env[1327]: time="2025-05-15T10:11:24.833450429Z" level=info msg="StartContainer for \"6f4a1e49fbdf113b2bbfcd3ba97eff11ff30b019a4662a72defe2eaf6995fdb1\"" May 15 10:11:24.837087 env[1327]: time="2025-05-15T10:11:24.837030973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:24.838722 env[1327]: time="2025-05-15T10:11:24.838692046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:24.840255 env[1327]: time="2025-05-15T10:11:24.840221879Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:24.840780 env[1327]: time="2025-05-15T10:11:24.840753637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 10:11:24.841991 env[1327]: time="2025-05-15T10:11:24.841958272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 10:11:24.843746 env[1327]: time="2025-05-15T10:11:24.843696024Z" level=info msg="CreateContainer within sandbox \"039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 
10:11:24.854433 env[1327]: time="2025-05-15T10:11:24.854392578Z" level=info msg="CreateContainer within sandbox \"039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"01e9f60a4082117d0df30bcc44b29264ec1d567162b7000486c266cb37a3825b\"" May 15 10:11:24.855786 env[1327]: time="2025-05-15T10:11:24.855754252Z" level=info msg="StartContainer for \"01e9f60a4082117d0df30bcc44b29264ec1d567162b7000486c266cb37a3825b\"" May 15 10:11:24.922346 env[1327]: time="2025-05-15T10:11:24.922304845Z" level=info msg="StartContainer for \"6f4a1e49fbdf113b2bbfcd3ba97eff11ff30b019a4662a72defe2eaf6995fdb1\" returns successfully" May 15 10:11:24.924404 env[1327]: time="2025-05-15T10:11:24.924371876Z" level=info msg="StartContainer for \"01e9f60a4082117d0df30bcc44b29264ec1d567162b7000486c266cb37a3825b\" returns successfully" May 15 10:11:25.029401 systemd-networkd[1097]: cali516745998ee: Gained IPv6LL May 15 10:11:25.506738 kubelet[2223]: E0515 10:11:25.506674 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:25.507330 kubelet[2223]: E0515 10:11:25.507311 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:25.517793 kubelet[2223]: I0515 10:11:25.517377 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bff68f469-flc89" podStartSLOduration=22.582187453 podStartE2EDuration="25.517360533s" podCreationTimestamp="2025-05-15 10:11:00 +0000 UTC" firstStartedPulling="2025-05-15 10:11:21.906395194 +0000 UTC m=+42.631883616" lastFinishedPulling="2025-05-15 10:11:24.841568274 +0000 UTC m=+45.567056696" observedRunningTime="2025-05-15 10:11:25.517259253 +0000 UTC m=+46.242747675" watchObservedRunningTime="2025-05-15 10:11:25.517360533 +0000 UTC m=+46.242848915" May 15 10:11:25.529745 kubelet[2223]: I0515 10:11:25.529518 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ddwbx" podStartSLOduration=32.529504602 podStartE2EDuration="32.529504602s" podCreationTimestamp="2025-05-15 10:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:11:25.529258243 +0000 UTC m=+46.254746665" watchObservedRunningTime="2025-05-15 10:11:25.529504602 +0000 UTC m=+46.254992984" May 15 10:11:25.529000 audit[4631]: NETFILTER_CFG table=filter:101 family=2 entries=12 op=nft_register_rule pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:25.529000 audit[4631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=ffffe6ca0fd0 a2=0 a3=1 items=0 ppid=2368 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:25.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:25.535000 audit[4631]: NETFILTER_CFG table=nat:102 family=2 entries=18 op=nft_register_rule pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:25.535000 audit[4631]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffe6ca0fd0 a2=0 a3=1 items=0 ppid=2368 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:25.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:26.510575 kubelet[2223]: E0515 10:11:26.509270 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:26.551000 audit[4657]: NETFILTER_CFG table=filter:103 family=2 entries=11 op=nft_register_rule pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:26.551000 audit[4657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffec5d4da0 a2=0 a3=1 items=0 ppid=2368 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:26.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:26.565367 systemd-networkd[1097]: cali7d99ecbcaf9: Gained IPv6LL May 15 10:11:26.569000 audit[4657]: NETFILTER_CFG table=nat:104 family=2 entries=61 op=nft_register_chain pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:26.569000 audit[4657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22668 a0=3 a1=ffffec5d4da0 a2=0 a3=1 items=0 ppid=2368 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:26.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:26.849601 env[1327]: time="2025-05-15T10:11:26.849484144Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:26.851101 env[1327]: time="2025-05-15T10:11:26.851064498Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:26.853038 env[1327]: time="2025-05-15T10:11:26.853014010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:26.854232 env[1327]: time="2025-05-15T10:11:26.854185605Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:26.855200 env[1327]: time="2025-05-15T10:11:26.855157201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 10:11:26.856140 env[1327]: 
time="2025-05-15T10:11:26.856052157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 10:11:26.869491 env[1327]: time="2025-05-15T10:11:26.869449222Z" level=info msg="CreateContainer within sandbox \"e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 10:11:26.879991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606227535.mount: Deactivated successfully. May 15 10:11:26.882835 env[1327]: time="2025-05-15T10:11:26.882780168Z" level=info msg="CreateContainer within sandbox \"e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7ca420f47a6867db7e4b18647a25524dd3746d4afec7e0b3cfe963f480067ff3\"" May 15 10:11:26.883257 env[1327]: time="2025-05-15T10:11:26.883209966Z" level=info msg="StartContainer for \"7ca420f47a6867db7e4b18647a25524dd3746d4afec7e0b3cfe963f480067ff3\"" May 15 10:11:26.956230 env[1327]: time="2025-05-15T10:11:26.956170467Z" level=info msg="StartContainer for \"7ca420f47a6867db7e4b18647a25524dd3746d4afec7e0b3cfe963f480067ff3\" returns successfully" May 15 10:11:27.448987 env[1327]: time="2025-05-15T10:11:27.448941538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:27.450290 env[1327]: time="2025-05-15T10:11:27.450253653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:27.451716 env[1327]: time="2025-05-15T10:11:27.451683967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:27.452919 env[1327]: time="2025-05-15T10:11:27.452884882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:27.453302 env[1327]: time="2025-05-15T10:11:27.453262001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 10:11:27.454289 env[1327]: time="2025-05-15T10:11:27.454261997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 10:11:27.455417 env[1327]: time="2025-05-15T10:11:27.455382112Z" level=info msg="CreateContainer within sandbox \"f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 10:11:27.464650 env[1327]: time="2025-05-15T10:11:27.464607636Z" level=info msg="CreateContainer within sandbox \"f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5007740fb60cdea95b60be9eaceeb9baac496f88552248b4cabf9afd5b511ed5\"" May 15 10:11:27.465307 env[1327]: time="2025-05-15T10:11:27.465268273Z" level=info msg="StartContainer for \"5007740fb60cdea95b60be9eaceeb9baac496f88552248b4cabf9afd5b511ed5\"" May 15 10:11:27.518970 kubelet[2223]: E0515 10:11:27.517775 2223 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:27.551672 env[1327]: time="2025-05-15T10:11:27.551597369Z" level=info msg="StartContainer for \"5007740fb60cdea95b60be9eaceeb9baac496f88552248b4cabf9afd5b511ed5\" returns successfully" May 15 10:11:27.578876 kubelet[2223]: I0515 10:11:27.578153 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6777c65db9-lhgd2" podStartSLOduration=20.652773185 podStartE2EDuration="25.578137103s" podCreationTimestamp="2025-05-15 10:11:02 +0000 UTC" firstStartedPulling="2025-05-15 10:11:21.93054756 +0000 UTC m=+42.656035982" lastFinishedPulling="2025-05-15 10:11:26.855911478 +0000 UTC m=+47.581399900" observedRunningTime="2025-05-15 10:11:27.52366324 +0000 UTC m=+48.249151662" watchObservedRunningTime="2025-05-15 10:11:27.578137103 +0000 UTC m=+48.303625525" May 15 10:11:27.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.43:22-10.0.0.1:37686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:27.991029 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:37686.service. May 15 10:11:27.992029 kernel: kauditd_printk_skb: 25 callbacks suppressed May 15 10:11:27.992094 kernel: audit: type=1130 audit(1747303887.989:360): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.43:22-10.0.0.1:37686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:28.032000 audit[4782]: USER_ACCT pid=4782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.035739 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:28.037960 sshd[4782]: Accepted publickey for core from 10.0.0.1 port 37686 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:28.033000 audit[4782]: CRED_ACQ pid=4782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.038234 kernel: audit: type=1101 audit(1747303888.032:361): pid=4782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.039727 systemd-logind[1310]: New session 14 of user core. May 15 10:11:28.040141 systemd[1]: Started session-14.scope. 
May 15 10:11:28.043931 kernel: audit: type=1103 audit(1747303888.033:362): pid=4782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.044005 kernel: audit: type=1006 audit(1747303888.034:363): pid=4782 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 15 10:11:28.034000 audit[4782]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc87a88e0 a2=3 a3=1 items=0 ppid=1 pid=4782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:28.047869 kernel: audit: type=1300 audit(1747303888.034:363): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc87a88e0 a2=3 a3=1 items=0 ppid=1 pid=4782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:28.034000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:28.049237 kernel: audit: type=1327 audit(1747303888.034:363): proctitle=737368643A20636F7265205B707269765D May 15 10:11:28.042000 audit[4782]: USER_START pid=4782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.053164 kernel: audit: type=1105 audit(1747303888.042:364): pid=4782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.044000 audit[4785]: CRED_ACQ pid=4785 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.056247 kernel: audit: type=1103 audit(1747303888.044:365): pid=4785 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.214669 sshd[4782]: pam_unix(sshd:session): session closed for user core May 15 10:11:28.214000 audit[4782]: USER_END pid=4782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.214000 audit[4782]: CRED_DISP pid=4782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.224190 kernel: audit: type=1106 audit(1747303888.214:366): pid=4782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.224311 kernel: audit: type=1104 audit(1747303888.214:367): pid=4782 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:28.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.43:22-10.0.0.1:37686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:28.221110 systemd-logind[1310]: Session 14 logged out. Waiting for processes to exit. May 15 10:11:28.221263 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:37686.service: Deactivated successfully. May 15 10:11:28.222126 systemd[1]: session-14.scope: Deactivated successfully. May 15 10:11:28.222578 systemd-logind[1310]: Removed session 14. May 15 10:11:28.532044 kubelet[2223]: I0515 10:11:28.531984 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bff68f469-qmms6" podStartSLOduration=23.77750376 podStartE2EDuration="28.531967596s" podCreationTimestamp="2025-05-15 10:11:00 +0000 UTC" firstStartedPulling="2025-05-15 10:11:22.699563202 +0000 UTC m=+43.425051624" lastFinishedPulling="2025-05-15 10:11:27.454027078 +0000 UTC m=+48.179515460" observedRunningTime="2025-05-15 10:11:28.531681037 +0000 UTC m=+49.257169459" watchObservedRunningTime="2025-05-15 10:11:28.531967596 +0000 UTC m=+49.257456018" May 15 10:11:28.544000 audit[4821]: NETFILTER_CFG table=filter:105 family=2 entries=10 op=nft_register_rule pid=4821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:28.544000 audit[4821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff0acfa40 a2=0 a3=1 items=0 ppid=2368 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:28.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:28.553000 audit[4821]: NETFILTER_CFG table=nat:106 family=2 entries=28 op=nft_register_rule pid=4821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:28.553000 audit[4821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=fffff0acfa40 a2=0 a3=1 items=0 ppid=2368 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:28.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:28.682684 kubelet[2223]: I0515 10:11:28.682647 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 10:11:28.683642 kubelet[2223]: E0515 10:11:28.683621 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:28.741000 audit[4823]: NETFILTER_CFG table=filter:107 family=2 entries=9 op=nft_register_rule pid=4823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:28.741000 audit[4823]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffebb7c710 a2=0 a3=1 items=0 ppid=2368 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:28.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:28.747000 audit[4823]: NETFILTER_CFG table=nat:108 family=2 entries=27 op=nft_register_chain pid=4823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:28.747000 audit[4823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffebb7c710 a2=0 a3=1 items=0 ppid=2368 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:28.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.252000 audit: BPF prog-id=10 op=LOAD May 15 10:11:29.252000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb6af898 a2=98 a3=ffffeb6af888 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.252000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.252000 audit: BPF prog-id=10 op=UNLOAD May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit: BPF prog-id=11 op=LOAD May 15 10:11:29.256000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffeb6af528 a2=74 a3=95 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.256000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.256000 audit: BPF prog-id=11 op=UNLOAD May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.256000 audit: BPF prog-id=12 op=LOAD May 15 10:11:29.256000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffeb6af588 a2=94 a3=2 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.256000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.256000 audit: BPF prog-id=12 op=UNLOAD May 15 10:11:29.339665 env[1327]: time="2025-05-15T10:11:29.339438856Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:29.341244 env[1327]: time="2025-05-15T10:11:29.341205810Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:29.343679 env[1327]: time="2025-05-15T10:11:29.343649000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:29.345238 env[1327]: time="2025-05-15T10:11:29.345183475Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:29.345790 env[1327]: time="2025-05-15T10:11:29.345747552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 10:11:29.350105 env[1327]: time="2025-05-15T10:11:29.350058416Z" level=info msg="CreateContainer within sandbox \"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 10:11:29.354000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { perfmon } for 
pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit: BPF prog-id=13 op=LOAD May 15 10:11:29.354000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffeb6af548 a2=40 a3=ffffeb6af578 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.354000 audit: BPF prog-id=13 op=UNLOAD May 15 10:11:29.354000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.354000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffeb6af660 a2=50 a3=0 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.364870 env[1327]: time="2025-05-15T10:11:29.364824800Z" level=info msg="CreateContainer within sandbox \"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"027162545f04ec895dde7f4ec0dff32a7ede9a606de4c5a2c0fe03a25c9de9f9\"" May 15 10:11:29.369057 env[1327]: time="2025-05-15T10:11:29.369019584Z" level=info msg="StartContainer for \"027162545f04ec895dde7f4ec0dff32a7ede9a606de4c5a2c0fe03a25c9de9f9\"" May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb6af5b8 a2=28 a3=ffffeb6af6e8 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb6af5e8 a2=28 a3=ffffeb6af718 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb6af498 a2=28 a3=ffffeb6af5c8 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb6af608 a2=28 a3=ffffeb6af738 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb6af5e8 a2=28 a3=ffffeb6af718 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb6af5d8 a2=28 a3=ffffeb6af708 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for 
pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb6af608 a2=28 a3=ffffeb6af738 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb6af5e8 a2=28 a3=ffffeb6af718 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb6af608 a2=28 a3=ffffeb6af738 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb6af5d8 a2=28 a3=ffffeb6af708 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb6af658 a2=28 a3=ffffeb6af798 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.374000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.374000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffeb6af390 a2=50 a3=0 items=0 ppid=4826 
pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit: BPF prog-id=14 op=LOAD May 15 10:11:29.375000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeb6af398 a2=94 a3=5 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.375000 audit: BPF prog-id=14 op=UNLOAD May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffeb6af4a0 a2=50 a3=0 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffeb6af5e8 a2=4 a3=3 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.375000 audit[4844]: AVC avc: denied { confidentiality } for pid=4844 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 10:11:29.375000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffeb6af5c8 a2=94 a3=6 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { confidentiality } for pid=4844 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 10:11:29.376000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffeb6aed98 a2=94 a3=83 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { perfmon } for pid=4844 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { bpf } for pid=4844 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.376000 audit[4844]: AVC avc: denied { confidentiality } for pid=4844 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 10:11:29.376000 audit[4844]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffeb6aed98 a2=94 a3=83 items=0 ppid=4826 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 10:11:29.404299 systemd[1]: run-containerd-runc-k8s.io-027162545f04ec895dde7f4ec0dff32a7ede9a606de4c5a2c0fe03a25c9de9f9-runc.JXfNmB.mount: Deactivated successfully. 
May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit: BPF prog-id=15 op=LOAD May 15 10:11:29.410000 audit[4863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5dff308 a2=98 a3=fffff5dff2f8 items=0 ppid=4826 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.410000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 10:11:29.410000 audit: BPF prog-id=15 op=UNLOAD May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit: BPF prog-id=16 op=LOAD May 15 10:11:29.410000 audit[4863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5dff1b8 a2=74 a3=95 items=0 ppid=4826 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.410000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 10:11:29.410000 audit: BPF prog-id=16 op=UNLOAD May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { perfmon } for pid=4863 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 
audit[4863]: AVC avc: denied { bpf } for pid=4863 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.410000 audit: BPF prog-id=17 op=LOAD May 15 10:11:29.410000 audit[4863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5dff1e8 a2=40 a3=fffff5dff218 items=0 ppid=4826 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.410000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 10:11:29.410000 audit: BPF prog-id=17 op=UNLOAD May 15 10:11:29.487438 env[1327]: time="2025-05-15T10:11:29.487388617Z" level=info msg="StartContainer for \"027162545f04ec895dde7f4ec0dff32a7ede9a606de4c5a2c0fe03a25c9de9f9\" returns successfully" May 15 10:11:29.489081 env[1327]: time="2025-05-15T10:11:29.489051691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 10:11:29.524693 kubelet[2223]: I0515 10:11:29.524599 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 10:11:29.527645 kubelet[2223]: E0515 10:11:29.525382 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:29.579676 systemd-networkd[1097]: vxlan.calico: Link UP May 15 10:11:29.579682 systemd-networkd[1097]: vxlan.calico: Gained carrier May 15 10:11:29.601000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
10:11:29.601000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.601000 audit: BPF prog-id=18 op=LOAD May 15 10:11:29.601000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff899968 a2=98 a3=ffffff899958 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.601000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.601000 audit: BPF prog-id=18 op=UNLOAD May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit: BPF prog-id=19 op=LOAD May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff899648 a2=74 a3=95 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit: BPF prog-id=19 op=UNLOAD May 15 
10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit: BPF prog-id=20 op=LOAD May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff8996a8 a2=94 a3=2 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit: BPF prog-id=20 op=UNLOAD May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffff8996d8 a2=28 a3=ffffff899808 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffff899708 a2=28 a3=ffffff899838 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffff8995b8 a2=28 a3=ffffff8996e8 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffff899728 a2=28 a3=ffffff899858 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffff899708 a2=28 a3=ffffff899838 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffff8996f8 a2=28 a3=ffffff899828 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffff899728 a2=28 a3=ffffff899858 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffff899708 a2=28 a3=ffffff899838 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffff899728 a2=28 a3=ffffff899858 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffff8996f8 a2=28 a3=ffffff899828 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffff899778 a2=28 a3=ffffff8998b8 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.604000 audit: BPF prog-id=21 op=LOAD May 15 10:11:29.604000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff899598 a2=40 a3=ffffff8995c8 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.604000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.604000 audit: BPF prog-id=21 op=UNLOAD May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffff8995c0 a2=50 a3=0 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffff8995c0 a2=50 a3=0 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit: BPF prog-id=22 op=LOAD May 15 10:11:29.606000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff898d28 a2=94 a3=2 items=0 ppid=4826 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.606000 audit: BPF prog-id=22 op=UNLOAD May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { perfmon } for pid=4929 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit[4929]: AVC avc: denied { bpf } for pid=4929 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.606000 audit: BPF prog-id=23 op=LOAD May 15 10:11:29.606000 audit[4929]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff898eb8 a2=94 a3=30 items=0 ppid=4826 pid=4929 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit: BPF prog-id=24 op=LOAD May 15 10:11:29.610000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf439868 a2=98 a3=ffffcf439858 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.610000 audit: BPF prog-id=24 op=UNLOAD May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit: BPF prog-id=25 op=LOAD May 15 10:11:29.610000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf4394f8 a2=74 a3=95 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.610000 audit: BPF prog-id=25 op=UNLOAD May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.610000 audit: BPF prog-id=26 op=LOAD May 15 10:11:29.610000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf439558 a2=94 a3=2 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.610000 audit: BPF prog-id=26 op=UNLOAD May 15 10:11:29.706000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit: BPF prog-id=27 op=LOAD May 15 10:11:29.706000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf439518 a2=40 a3=ffffcf439548 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.706000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.706000 audit: BPF prog-id=27 op=UNLOAD May 15 10:11:29.706000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.706000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcf439630 a2=50 a3=0 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf439588 a2=28 a3=ffffcf4396b8 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf4395b8 a2=28 a3=ffffcf4396e8 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf439468 a2=28 a3=ffffcf439598 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf4395d8 a2=28 a3=ffffcf439708 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf4395b8 a2=28 a3=ffffcf4396e8 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf4395a8 a2=28 a3=ffffcf4396d8 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf4395d8 a2=28 a3=ffffcf439708 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf4395b8 a2=28 a3=ffffcf4396e8 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf4395d8 a2=28 a3=ffffcf439708 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf4395a8 a2=28 a3=ffffcf4396d8 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf439628 a2=28 a3=ffffcf439768 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcf439360 a2=50 a3=0 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 
audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit: BPF prog-id=28 op=LOAD May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcf439368 a2=94 a3=5 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit: BPF prog-id=28 op=UNLOAD May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcf439470 a2=50 a3=0 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcf4395b8 a2=4 a3=3 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { confidentiality } for pid=4933 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcf439598 a2=94 a3=6 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 
audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.715000 audit[4933]: AVC avc: denied { confidentiality } for pid=4933 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 10:11:29.715000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcf438d68 a2=94 a3=83 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { perfmon } for pid=4933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { confidentiality } for pid=4933 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 10:11:29.716000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcf438d68 a2=94 a3=83 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcf43a7a8 a2=10 a3=ffffcf43a898 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcf43a668 a2=10 a3=ffffcf43a758 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.716000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcf43a5d8 a2=10 a3=ffffcf43a758 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.716000 audit[4933]: AVC avc: denied { bpf } for pid=4933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 10:11:29.716000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcf43a5d8 a2=10 a3=ffffcf43a758 items=0 ppid=4826 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 10:11:29.728000 audit: BPF prog-id=23 op=UNLOAD May 15 10:11:29.782000 audit[4985]: NETFILTER_CFG table=mangle:109 family=2 entries=16 op=nft_register_chain pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 10:11:29.782000 audit[4985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffccd26660 a2=0 a3=ffff951a4fa8 items=0 ppid=4826 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.782000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 10:11:29.788000 audit[4988]: NETFILTER_CFG table=nat:110 family=2 entries=15 op=nft_register_chain pid=4988 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 10:11:29.788000 audit[4988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe2000a70 a2=0 a3=ffffb96d4fa8 items=0 ppid=4826 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.788000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 10:11:29.792000 audit[4984]: NETFILTER_CFG table=raw:111 family=2 entries=21 op=nft_register_chain pid=4984 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 10:11:29.792000 audit[4984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 
a1=fffff3df4210 a2=0 a3=ffffae620fa8 items=0 ppid=4826 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 10:11:29.793000 audit[4987]: NETFILTER_CFG table=filter:112 family=2 entries=215 op=nft_register_chain pid=4987 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 10:11:29.793000 audit[4987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=125776 a0=3 a1=ffffc9727a90 a2=0 a3=ffff87f2dfa8 items=0 ppid=4826 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:29.793000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 10:11:31.148616 env[1327]: time="2025-05-15T10:11:31.148570864Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:31.150177 env[1327]: time="2025-05-15T10:11:31.150151979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:31.152068 env[1327]: time="2025-05-15T10:11:31.152041612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:31.154476 env[1327]: time="2025-05-15T10:11:31.154435283Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:11:31.154946 env[1327]: time="2025-05-15T10:11:31.154917962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 10:11:31.157505 env[1327]: time="2025-05-15T10:11:31.157472192Z" level=info msg="CreateContainer within sandbox \"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 10:11:31.171719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241396518.mount: Deactivated successfully. 
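The bpftool denials above record the command line only as a hex-encoded PROCTITLE field and the syscall and capabilities only as numbers. A minimal decoding sketch in Python 3 (the hex string is copied from the records above; the aarch64 syscall and capability name mappings are standard kernel constants, not something stated in the log itself):

    # PROCTITLE is the process argv, NUL-separated and hex-encoded.
    PROCTITLE = ("627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F67"
                 "0073686F770070696E6E6564002F7379732F66732F6270662F63616C"
                 "69636F2F7864702F70726566696C7465725F76315F63616C69636F5F"
                 "746D705F41")

    argv = bytes.fromhex(PROCTITLE).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

    # arch=c00000b7 is aarch64; the numeric codes seen in the records above:
    SYSCALLS_AARCH64 = {211: "sendmsg", 280: "bpf"}
    CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}
    print(SYSCALLS_AARCH64[280], CAPABILITIES[39])   # -> bpf CAP_BPF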
May 15 10:11:31.173386 systemd-networkd[1097]: vxlan.calico: Gained IPv6LL May 15 10:11:31.175836 env[1327]: time="2025-05-15T10:11:31.175794487Z" level=info msg="CreateContainer within sandbox \"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"600cf68aab325bc13262bf1b6e791c9d04596e1263dc94591e8a5ae5c20362e7\"" May 15 10:11:31.176426 env[1327]: time="2025-05-15T10:11:31.176397725Z" level=info msg="StartContainer for \"600cf68aab325bc13262bf1b6e791c9d04596e1263dc94591e8a5ae5c20362e7\"" May 15 10:11:31.287952 env[1327]: time="2025-05-15T10:11:31.287903085Z" level=info msg="StartContainer for \"600cf68aab325bc13262bf1b6e791c9d04596e1263dc94591e8a5ae5c20362e7\" returns successfully" May 15 10:11:31.455161 kubelet[2223]: I0515 10:11:31.455056 2223 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 10:11:31.456962 kubelet[2223]: I0515 10:11:31.456935 2223 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 10:11:31.545873 kubelet[2223]: I0515 10:11:31.545812 2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lr4rp" podStartSLOduration=21.115523765 podStartE2EDuration="29.54579664s" podCreationTimestamp="2025-05-15 10:11:02 +0000 UTC" firstStartedPulling="2025-05-15 10:11:22.725654603 +0000 UTC m=+43.451143025" lastFinishedPulling="2025-05-15 10:11:31.155927478 +0000 UTC m=+51.881415900" observedRunningTime="2025-05-15 10:11:31.545296961 +0000 UTC m=+52.270785383" watchObservedRunningTime="2025-05-15 10:11:31.54579664 +0000 UTC m=+52.271285062" May 15 10:11:33.215835 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:35842.service. May 15 10:11:33.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.43:22-10.0.0.1:35842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:33.217064 kernel: kauditd_printk_skb: 493 callbacks suppressed May 15 10:11:33.217134 kernel: audit: type=1130 audit(1747303893.214:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.43:22-10.0.0.1:35842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:33.261000 audit[5033]: USER_ACCT pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.263153 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 35842 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:33.264902 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:33.263000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.268847 systemd-logind[1310]: New session 15 of user core. 
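The kubelet pod_startup_latency_tracker record above reports two figures for csi-node-driver-lr4rp. A rough cross-check, assuming the SLO duration is the end-to-end startup time minus the image-pull window (an interpretation consistent with the numbers in the record, not stated in it); the timestamps are copied from the same record and truncated to microseconds:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    created   = datetime.strptime("2025-05-15 10:11:02.000000 +0000", fmt)  # podCreationTimestamp
    running   = datetime.strptime("2025-05-15 10:11:31.545296 +0000", fmt)  # observedRunningTime
    pull_from = datetime.strptime("2025-05-15 10:11:22.725654 +0000", fmt)  # firstStartedPulling
    pull_to   = datetime.strptime("2025-05-15 10:11:31.155927 +0000", fmt)  # lastFinishedPulling

    e2e = (running - created).total_seconds()           # ~29.5 s, close to podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()   # ~21.1 s, close to podStartSLOduration
    print(round(e2e, 3), round(slo, 3))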
May 15 10:11:33.269568 kernel: audit: type=1101 audit(1747303893.261:469): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.269607 kernel: audit: type=1103 audit(1747303893.263:470): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.269625 kernel: audit: type=1006 audit(1747303893.263:471): pid=5033 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 15 10:11:33.269700 systemd[1]: Started session-15.scope. May 15 10:11:33.263000 audit[5033]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc119b640 a2=3 a3=1 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:33.275301 kernel: audit: type=1300 audit(1747303893.263:471): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc119b640 a2=3 a3=1 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:33.275377 kernel: audit: type=1327 audit(1747303893.263:471): proctitle=737368643A20636F7265205B707269765D May 15 10:11:33.263000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:33.276204 kernel: audit: type=1105 audit(1747303893.272:472): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.272000 audit[5033]: USER_START pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.273000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.282520 kernel: audit: type=1103 audit(1747303893.273:473): pid=5036 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.413005 kubelet[2223]: E0515 10:11:33.412090 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:11:33.460286 sshd[5033]: pam_unix(sshd:session): session closed for user core May 15 10:11:33.460000 audit[5033]: USER_END pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.463272 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:35842.service: Deactivated successfully. May 15 10:11:33.464309 systemd-logind[1310]: Session 15 logged out. Waiting for processes to exit. May 15 10:11:33.464360 systemd[1]: session-15.scope: Deactivated successfully. May 15 10:11:33.460000 audit[5033]: CRED_DISP pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.466085 systemd-logind[1310]: Removed session 15. May 15 10:11:33.468549 kernel: audit: type=1106 audit(1747303893.460:474): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.468650 kernel: audit: type=1104 audit(1747303893.460:475): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:33.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.43:22-10.0.0.1:35842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.463518 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:35856.service. May 15 10:11:38.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.43:22-10.0.0.1:35856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.470532 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 10:11:38.470607 kernel: audit: type=1130 audit(1747303898.462:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.43:22-10.0.0.1:35856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.500000 audit[5076]: USER_ACCT pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.501589 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:38.501000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.502691 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:38.509918 systemd[1]: Started session-16.scope. May 15 10:11:38.510112 systemd-logind[1310]: New session 16 of user core. 
May 15 10:11:38.514902 kernel: audit: type=1101 audit(1747303898.500:478): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.515025 kernel: audit: type=1103 audit(1747303898.501:479): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.515049 kernel: audit: type=1006 audit(1747303898.501:480): pid=5076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 15 10:11:38.519734 kernel: audit: type=1300 audit(1747303898.501:480): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4658110 a2=3 a3=1 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:38.501000 audit[5076]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4658110 a2=3 a3=1 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:38.524449 kernel: audit: type=1327 audit(1747303898.501:480): proctitle=737368643A20636F7265205B707269765D May 15 10:11:38.501000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:38.521000 audit[5076]: USER_START pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.532355 kernel: audit: type=1105 audit(1747303898.521:481): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.532446 kernel: audit: type=1103 audit(1747303898.524:482): pid=5079 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.524000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.647319 sshd[5076]: pam_unix(sshd:session): session closed for user core May 15 10:11:38.649767 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:35868.service. May 15 10:11:38.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.43:22-10.0.0.1:35868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:11:38.654249 kernel: audit: type=1130 audit(1747303898.648:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.43:22-10.0.0.1:35868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.655960 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:35856.service: Deactivated successfully. May 15 10:11:38.657138 systemd-logind[1310]: Session 16 logged out. Waiting for processes to exit. May 15 10:11:38.657157 systemd[1]: session-16.scope: Deactivated successfully. May 15 10:11:38.653000 audit[5076]: USER_END pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.653000 audit[5076]: CRED_DISP pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.43:22-10.0.0.1:35856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.662639 kernel: audit: type=1106 audit(1747303898.653:484): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.663698 systemd-logind[1310]: Removed session 16. May 15 10:11:38.689000 audit[5088]: USER_ACCT pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.691411 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 35868 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:38.690000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.691000 audit[5088]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea7fbe50 a2=3 a3=1 items=0 ppid=1 pid=5088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:38.691000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:38.692570 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:38.696031 systemd-logind[1310]: New session 17 of user core. May 15 10:11:38.696985 systemd[1]: Started session-17.scope. 
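Once kauditd echoes records through the kernel ring buffer (the kernel: audit: type=... lines above), the timestamp appears as an audit(EPOCH:SERIAL) field. Converting the epoch part back to UTC reproduces the journal's wall-clock time; a one-liner sketch with a stamp copied from the records above:

    from datetime import datetime, timezone

    stamp = "1747303898.653:484"                 # from: type=1106 audit(1747303898.653:484)
    epoch, serial = stamp.rsplit(":", 1)
    print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
    # 2025-05-15 10:11:38.653000+00:00 serial 484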
May 15 10:11:38.700000 audit[5088]: USER_START pid=5088 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.702000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.944567 sshd[5088]: pam_unix(sshd:session): session closed for user core May 15 10:11:38.945000 audit[5088]: USER_END pid=5088 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.945000 audit[5088]: CRED_DISP pid=5088 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.947420 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:35878.service. May 15 10:11:38.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.43:22-10.0.0.1:35878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.951778 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:35868.service: Deactivated successfully. May 15 10:11:38.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.43:22-10.0.0.1:35868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:38.952845 systemd[1]: session-17.scope: Deactivated successfully. May 15 10:11:38.953006 systemd-logind[1310]: Session 17 logged out. Waiting for processes to exit. May 15 10:11:38.954250 systemd-logind[1310]: Removed session 17. May 15 10:11:38.989000 audit[5100]: USER_ACCT pid=5100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.990808 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:38.990000 audit[5100]: CRED_ACQ pid=5100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:38.990000 audit[5100]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8e00270 a2=3 a3=1 items=0 ppid=1 pid=5100 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:38.990000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:38.991930 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:38.995309 systemd-logind[1310]: New session 18 of user core. 
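The env[1327] records that follow wrap Calico's own cni-plugin and ipam log lines for each StopPodSandbox / RemovePodSandbox pass. A sketch for pulling the per-sandbox teardown milestones out of that text (journal.txt is again a hypothetical dump of this journal; only the first 12 hex characters of each ContainerID are kept for readability):

    import re
    from collections import defaultdict

    milestones = ("Cleaning up netns",
                  "Releasing IP address(es)",
                  "Teardown processing complete.")
    pattern = re.compile(r'(%s)\s+ContainerID="([0-9a-f]{12})'
                         % "|".join(re.escape(m) for m in milestones))

    steps = defaultdict(list)
    with open("journal.txt") as fh:              # hypothetical dump of this journal
        for line in fh:
            for msg, cid in pattern.findall(line):
                steps[cid].append(msg)

    # Each sandbox below appears twice: once for StopPodSandbox and once for
    # the "Forcibly stopping sandbox" pass before RemovePodSandbox.
    for cid, seen in steps.items():
        print(cid, "->", "; ".join(seen))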
May 15 10:11:38.996191 systemd[1]: Started session-18.scope. May 15 10:11:38.999000 audit[5100]: USER_START pid=5100 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:39.000000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:39.340930 env[1327]: time="2025-05-15T10:11:39.340890736Z" level=info msg="StopPodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\"" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.378 [WARNING][5130] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec6491f9-2d72-4fff-91b5-379e16328d47", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825", Pod:"coredns-7db6d8ff4d-bdvkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali516745998ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.378 [INFO][5130] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.378 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" iface="eth0" netns="" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.378 [INFO][5130] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.378 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.410 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.410 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.410 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.422 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.422 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.424 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:39.429144 env[1327]: 2025-05-15 10:11:39.426 [INFO][5130] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.429144 env[1327]: time="2025-05-15T10:11:39.428787479Z" level=info msg="TearDown network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" successfully" May 15 10:11:39.429144 env[1327]: time="2025-05-15T10:11:39.428811919Z" level=info msg="StopPodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" returns successfully" May 15 10:11:39.430052 env[1327]: time="2025-05-15T10:11:39.429950796Z" level=info msg="RemovePodSandbox for \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\"" May 15 10:11:39.430052 env[1327]: time="2025-05-15T10:11:39.429984436Z" level=info msg="Forcibly stopping sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\"" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.548 [WARNING][5170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec6491f9-2d72-4fff-91b5-379e16328d47", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2764e717c639f57032e727656f3baf6e08ac6ba0d310d3cad144a1746fdf9825", Pod:"coredns-7db6d8ff4d-bdvkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali516745998ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.549 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.549 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" iface="eth0" netns="" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.549 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.549 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.638 [INFO][5179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.638 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.638 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.656 [WARNING][5179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.656 [INFO][5179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" HandleID="k8s-pod-network.6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" Workload="localhost-k8s-coredns--7db6d8ff4d--bdvkn-eth0" May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.658 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:39.661925 env[1327]: 2025-05-15 10:11:39.660 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41" May 15 10:11:39.661925 env[1327]: time="2025-05-15T10:11:39.661901117Z" level=info msg="TearDown network for sandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" successfully" May 15 10:11:39.668742 env[1327]: time="2025-05-15T10:11:39.668698458Z" level=info msg="RemovePodSandbox \"6a7940bbe68d74ea337a3a02082f83d83d89187a6716cec7184d2fb68469dc41\" returns successfully" May 15 10:11:39.669340 env[1327]: time="2025-05-15T10:11:39.669311216Z" level=info msg="StopPodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\"" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.705 [WARNING][5201] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lr4rp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0cb8081-235c-41eb-97c5-f1fef3d019bf", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab", Pod:"csi-node-driver-lr4rp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38bd2555809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.705 [INFO][5201] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.705 [INFO][5201] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" iface="eth0" netns="" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.706 [INFO][5201] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.706 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.735 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.735 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.736 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.746 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.746 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.747 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:39.754537 env[1327]: 2025-05-15 10:11:39.751 [INFO][5201] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.754992 env[1327]: time="2025-05-15T10:11:39.754551127Z" level=info msg="TearDown network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" successfully" May 15 10:11:39.754992 env[1327]: time="2025-05-15T10:11:39.754584646Z" level=info msg="StopPodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" returns successfully" May 15 10:11:39.755426 env[1327]: time="2025-05-15T10:11:39.755403924Z" level=info msg="RemovePodSandbox for \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\"" May 15 10:11:39.755509 env[1327]: time="2025-05-15T10:11:39.755437084Z" level=info msg="Forcibly stopping sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\"" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.793 [WARNING][5233] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lr4rp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0cb8081-235c-41eb-97c5-f1fef3d019bf", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8617c093a26c83278ea8e764707f6c868627b23acb23e79ad345ad4bd06ad4ab", Pod:"csi-node-driver-lr4rp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38bd2555809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.794 [INFO][5233] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.794 [INFO][5233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" iface="eth0" netns="" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.794 [INFO][5233] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.794 [INFO][5233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.816 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.816 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.816 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.824 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.824 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" HandleID="k8s-pod-network.f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" Workload="localhost-k8s-csi--node--driver--lr4rp-eth0" May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.826 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:39.829340 env[1327]: 2025-05-15 10:11:39.827 [INFO][5233] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd" May 15 10:11:39.829779 env[1327]: time="2025-05-15T10:11:39.829370788Z" level=info msg="TearDown network for sandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" successfully" May 15 10:11:39.832275 env[1327]: time="2025-05-15T10:11:39.832227659Z" level=info msg="RemovePodSandbox \"f7ef4b281c290bdce869a489050c817fe08919b2b196e76da40b11a0fa3940cd\" returns successfully" May 15 10:11:39.832764 env[1327]: time="2025-05-15T10:11:39.832739338Z" level=info msg="StopPodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\"" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.868 [WARNING][5276] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f", Pod:"coredns-7db6d8ff4d-ddwbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d99ecbcaf9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 
10:11:39.902578 env[1327]: 2025-05-15 10:11:39.869 [INFO][5276] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.869 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" iface="eth0" netns="" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.869 [INFO][5276] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.869 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.888 [INFO][5284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.889 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.889 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.897 [WARNING][5284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.897 [INFO][5284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.899 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:39.902578 env[1327]: 2025-05-15 10:11:39.900 [INFO][5276] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.903077 env[1327]: time="2025-05-15T10:11:39.902610414Z" level=info msg="TearDown network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" successfully" May 15 10:11:39.903077 env[1327]: time="2025-05-15T10:11:39.902641654Z" level=info msg="StopPodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" returns successfully" May 15 10:11:39.903548 env[1327]: time="2025-05-15T10:11:39.903527691Z" level=info msg="RemovePodSandbox for \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\"" May 15 10:11:39.903609 env[1327]: time="2025-05-15T10:11:39.903556891Z" level=info msg="Forcibly stopping sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\"" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.939 [WARNING][5306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a56d6f69-05c6-49eb-910c-8dc8aa5ddf37", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d80263fa1ab5de49521aa1c7e51b6e17a2b2aeb8d62454a2995a6e73f64bb0f", Pod:"coredns-7db6d8ff4d-ddwbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d99ecbcaf9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.939 [INFO][5306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.939 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" iface="eth0" netns="" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.939 [INFO][5306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.939 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.961 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.962 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.962 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.970 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.970 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" HandleID="k8s-pod-network.d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" Workload="localhost-k8s-coredns--7db6d8ff4d--ddwbx-eth0" May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.971 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:39.974923 env[1327]: 2025-05-15 10:11:39.973 [INFO][5306] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d" May 15 10:11:39.974923 env[1327]: time="2025-05-15T10:11:39.974891962Z" level=info msg="TearDown network for sandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" successfully" May 15 10:11:39.979760 env[1327]: time="2025-05-15T10:11:39.979711948Z" level=info msg="RemovePodSandbox \"d374bfeb8620676db439f09299d02f334a891812fb965efda684968046b8776d\" returns successfully" May 15 10:11:39.980376 env[1327]: time="2025-05-15T10:11:39.980349466Z" level=info msg="StopPodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\"" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.060 [WARNING][5336] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0", GenerateName:"calico-kube-controllers-6777c65db9-", Namespace:"calico-system", SelfLink:"", UID:"12e389b9-6e26-4c9e-8f17-589ec81bbd99", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777c65db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b", Pod:"calico-kube-controllers-6777c65db9-lhgd2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72082c87d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.060 [INFO][5336] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.060 
[INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" iface="eth0" netns="" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.060 [INFO][5336] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.060 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.113 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.113 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.113 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.129 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.129 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.130 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:40.142347 env[1327]: 2025-05-15 10:11:40.133 [INFO][5336] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.142949 env[1327]: time="2025-05-15T10:11:40.142913801Z" level=info msg="TearDown network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" successfully" May 15 10:11:40.143030 env[1327]: time="2025-05-15T10:11:40.143001201Z" level=info msg="StopPodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" returns successfully" May 15 10:11:40.143597 env[1327]: time="2025-05-15T10:11:40.143525639Z" level=info msg="RemovePodSandbox for \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\"" May 15 10:11:40.143665 env[1327]: time="2025-05-15T10:11:40.143605519Z" level=info msg="Forcibly stopping sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\"" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.185 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0", GenerateName:"calico-kube-controllers-6777c65db9-", Namespace:"calico-system", SelfLink:"", UID:"12e389b9-6e26-4c9e-8f17-589ec81bbd99", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777c65db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e481904f6d715d6e2a4c3525cd2ed622d9e5e289937d97fabde056f0cf1c2b2b", Pod:"calico-kube-controllers-6777c65db9-lhgd2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72082c87d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.185 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.185 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" iface="eth0" netns="" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.185 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.185 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.216 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.216 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.216 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.224 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.224 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" HandleID="k8s-pod-network.c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" Workload="localhost-k8s-calico--kube--controllers--6777c65db9--lhgd2-eth0" May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.225 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:40.238104 env[1327]: 2025-05-15 10:11:40.233 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2" May 15 10:11:40.238104 env[1327]: time="2025-05-15T10:11:40.238071610Z" level=info msg="TearDown network for sandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" successfully" May 15 10:11:40.240986 env[1327]: time="2025-05-15T10:11:40.240940361Z" level=info msg="RemovePodSandbox \"c1d95f0e92323a7d6ac066a33388195e6977604dc176d8fedb504507d6fe69e2\" returns successfully" May 15 10:11:40.241481 env[1327]: time="2025-05-15T10:11:40.241449400Z" level=info msg="StopPodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\"" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.277 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"304e1844-0899-4d12-8f60-1c590160ff7b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21", Pod:"calico-apiserver-6bff68f469-qmms6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f8738b3fa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.277 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.326893 env[1327]: 2025-05-15 
10:11:40.278 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" iface="eth0" netns="" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.278 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.278 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.313 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.313 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.313 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.321 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.321 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.323 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:40.326893 env[1327]: 2025-05-15 10:11:40.325 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.327613 env[1327]: time="2025-05-15T10:11:40.326926516Z" level=info msg="TearDown network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" successfully" May 15 10:11:40.327613 env[1327]: time="2025-05-15T10:11:40.326957516Z" level=info msg="StopPodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" returns successfully" May 15 10:11:40.327994 env[1327]: time="2025-05-15T10:11:40.327963793Z" level=info msg="RemovePodSandbox for \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\"" May 15 10:11:40.328243 env[1327]: time="2025-05-15T10:11:40.328183273Z" level=info msg="Forcibly stopping sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\"" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.371 [WARNING][5450] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"304e1844-0899-4d12-8f60-1c590160ff7b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6b6f3b4911ec425a031676f8db95d4e78c18622dcdc588798bfa32dd0984e21", Pod:"calico-apiserver-6bff68f469-qmms6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f8738b3fa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.371 [INFO][5450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.371 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" iface="eth0" netns="" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.371 [INFO][5450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.371 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.399 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.399 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.399 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.407 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.407 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" HandleID="k8s-pod-network.42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" Workload="localhost-k8s-calico--apiserver--6bff68f469--qmms6-eth0" May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.409 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:40.412719 env[1327]: 2025-05-15 10:11:40.411 [INFO][5450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b" May 15 10:11:40.413501 env[1327]: time="2025-05-15T10:11:40.413455789Z" level=info msg="TearDown network for sandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" successfully" May 15 10:11:40.419600 env[1327]: time="2025-05-15T10:11:40.419559972Z" level=info msg="RemovePodSandbox \"42120f112080505c29fd497c697f3ba1dd6b584a66636a2049bfc6be700be51b\" returns successfully" May 15 10:11:40.420173 env[1327]: time="2025-05-15T10:11:40.420148370Z" level=info msg="StopPodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\"" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.467 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"5adeff55-662e-40ab-bc79-10150d8d28e3", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d", Pod:"calico-apiserver-6bff68f469-flc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ecf8aecf70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.467 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.467 
[INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" iface="eth0" netns="" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.467 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.467 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.502 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.502 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.502 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.511 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.511 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.512 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:40.516462 env[1327]: 2025-05-15 10:11:40.514 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.517022 env[1327]: time="2025-05-15T10:11:40.516982454Z" level=info msg="TearDown network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" successfully" May 15 10:11:40.517091 env[1327]: time="2025-05-15T10:11:40.517074174Z" level=info msg="StopPodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" returns successfully" May 15 10:11:40.517656 env[1327]: time="2025-05-15T10:11:40.517625012Z" level=info msg="RemovePodSandbox for \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\"" May 15 10:11:40.517721 env[1327]: time="2025-05-15T10:11:40.517663572Z" level=info msg="Forcibly stopping sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\"" May 15 10:11:40.595000 audit[5529]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=5529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:40.595000 audit[5529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffec9a1310 a2=0 a3=1 items=0 ppid=2368 pid=5529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:40.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:40.605000 audit[5529]: NETFILTER_CFG table=nat:114 family=2 entries=22 op=nft_register_rule pid=5529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:40.605000 audit[5529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffec9a1310 a2=0 a3=1 items=0 ppid=2368 pid=5529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:40.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:40.612807 sshd[5100]: pam_unix(sshd:session): session closed for user core May 15 10:11:40.613000 audit[5100]: USER_END pid=5100 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.613000 audit[5100]: CRED_DISP pid=5100 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.615408 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:35886.service. May 15 10:11:40.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.43:22-10.0.0.1:35886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:40.616575 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:35878.service: Deactivated successfully. 
May 15 10:11:40.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.43:22-10.0.0.1:35878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:40.618084 systemd-logind[1310]: Session 18 logged out. Waiting for processes to exit. May 15 10:11:40.618106 systemd[1]: session-18.scope: Deactivated successfully. May 15 10:11:40.620808 systemd-logind[1310]: Removed session 18. May 15 10:11:40.630000 audit[5535]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:40.630000 audit[5535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffcb5877c0 a2=0 a3=1 items=0 ppid=2368 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:40.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:40.641000 audit[5535]: NETFILTER_CFG table=nat:116 family=2 entries=22 op=nft_register_rule pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:40.641000 audit[5535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffcb5877c0 a2=0 a3=1 items=0 ppid=2368 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:40.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.560 [WARNING][5513] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0", GenerateName:"calico-apiserver-6bff68f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"5adeff55-662e-40ab-bc79-10150d8d28e3", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 10, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bff68f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"039a43ed8e181681f47bad79aaca2fa3b7f2fa3d71559ac0561e778aee076b7d", Pod:"calico-apiserver-6bff68f469-flc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ecf8aecf70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.561 [INFO][5513] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.561 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" iface="eth0" netns="" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.561 [INFO][5513] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.561 [INFO][5513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.594 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.594 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.594 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.611 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.611 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" HandleID="k8s-pod-network.2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" Workload="localhost-k8s-calico--apiserver--6bff68f469--flc89-eth0" May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.632 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 10:11:40.645283 env[1327]: 2025-05-15 10:11:40.635 [INFO][5513] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291" May 15 10:11:40.645731 env[1327]: time="2025-05-15T10:11:40.645690567Z" level=info msg="TearDown network for sandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" successfully" May 15 10:11:40.652598 env[1327]: time="2025-05-15T10:11:40.652556307Z" level=info msg="RemovePodSandbox \"2e89e6c32b42da1e65e647d83b889d0e03656fca679a01a97af0b5e8c5999291\" returns successfully" May 15 10:11:40.663000 audit[5531]: USER_ACCT pid=5531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.665277 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 35886 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:40.664000 audit[5531]: CRED_ACQ pid=5531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.664000 audit[5531]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3166f30 a2=3 a3=1 items=0 ppid=1 pid=5531 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:40.664000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:40.666396 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:40.670355 systemd-logind[1310]: New session 19 of user core. May 15 10:11:40.670791 systemd[1]: Started session-19.scope. May 15 10:11:40.674000 audit[5531]: USER_START pid=5531 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.675000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.937069 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:35894.service. 
May 15 10:11:40.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.43:22-10.0.0.1:35894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:40.939409 sshd[5531]: pam_unix(sshd:session): session closed for user core May 15 10:11:40.939000 audit[5531]: USER_END pid=5531 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.939000 audit[5531]: CRED_DISP pid=5531 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.941873 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:35886.service: Deactivated successfully. May 15 10:11:40.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.43:22-10.0.0.1:35886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:40.942945 systemd[1]: session-19.scope: Deactivated successfully. May 15 10:11:40.942965 systemd-logind[1310]: Session 19 logged out. Waiting for processes to exit. May 15 10:11:40.945806 systemd-logind[1310]: Removed session 19. May 15 10:11:40.973000 audit[5546]: USER_ACCT pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.975120 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 35894 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:40.974000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.974000 audit[5546]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9caa1b0 a2=3 a3=1 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:40.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:40.976362 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:40.979917 systemd-logind[1310]: New session 20 of user core. May 15 10:11:40.980777 systemd[1]: Started session-20.scope. 
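The audit PROCTITLE payloads above are the process's argv encoded as NUL-separated bytes rendered in hex, so they decode directly to command lines: 69707461626C65732D...636F756E74657273 is iptables-restore -w 5 -W 100000 --noflush --counters, and 737368643A20636F7265205B707269765D is sshd: core [priv]. A small decoding sketch follows; decodeProctitle is an illustrative helper name, not an auditd API.

package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

// decodeProctitle turns an audit PROCTITLE hex payload back into the
// command line it encodes; argv elements are separated by NUL bytes.
func decodeProctitle(h string) (string, error) {
    raw, err := hex.DecodeString(h)
    if err != nil {
        return "", err
    }
    return strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " "), nil
}

func main() {
    for _, h := range []string{
        "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
        "737368643A20636F7265205B707269765D",
    } {
        cmd, err := decodeProctitle(h)
        if err != nil {
            fmt.Println("bad payload:", err)
            continue
        }
        fmt.Println(cmd)
    }
}

In the decoded iptables-restore invocation, -w 5 and -W 100000 wait for the xtables lock (up to 5 seconds, retrying every 100000 microseconds), --noflush applies the submitted chains without flushing the rest of the table, and --counters preserves packet/byte counters; this is the usual periodic rule-sync invocation from components such as kube-proxy, which accounts for the recurring NETFILTER_CFG events on the filter and nat tables.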
May 15 10:11:40.985000 audit[5546]: USER_START pid=5546 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:40.986000 audit[5551]: CRED_ACQ pid=5551 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:41.095500 sshd[5546]: pam_unix(sshd:session): session closed for user core May 15 10:11:41.095000 audit[5546]: USER_END pid=5546 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:41.095000 audit[5546]: CRED_DISP pid=5546 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:41.097914 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:35894.service: Deactivated successfully. May 15 10:11:41.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.43:22-10.0.0.1:35894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:41.098854 systemd-logind[1310]: Session 20 logged out. Waiting for processes to exit. May 15 10:11:41.098919 systemd[1]: session-20.scope: Deactivated successfully. May 15 10:11:41.100115 systemd-logind[1310]: Removed session 20. 
May 15 10:11:45.786000 audit[5564]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:45.788842 kernel: kauditd_printk_skb: 57 callbacks suppressed May 15 10:11:45.788890 kernel: audit: type=1325 audit(1747303905.786:526): table=filter:117 family=2 entries=20 op=nft_register_rule pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:45.786000 audit[5564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffee536ef0 a2=0 a3=1 items=0 ppid=2368 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:45.794542 kernel: audit: type=1300 audit(1747303905.786:526): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffee536ef0 a2=0 a3=1 items=0 ppid=2368 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:45.794606 kernel: audit: type=1327 audit(1747303905.786:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:45.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:45.802000 audit[5564]: NETFILTER_CFG table=nat:118 family=2 entries=106 op=nft_register_chain pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:45.806232 kernel: audit: type=1325 audit(1747303905.802:527): table=nat:118 family=2 entries=106 op=nft_register_chain pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:45.806277 kernel: audit: type=1300 audit(1747303905.802:527): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffee536ef0 a2=0 a3=1 items=0 ppid=2368 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:45.802000 audit[5564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffee536ef0 a2=0 a3=1 items=0 ppid=2368 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:45.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:45.811972 kernel: audit: type=1327 audit(1747303905.802:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:46.098434 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:46648.service. May 15 10:11:46.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.43:22-10.0.0.1:46648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:11:46.102265 kernel: audit: type=1130 audit(1747303906.097:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.43:22-10.0.0.1:46648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:46.142000 audit[5566]: USER_ACCT pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.144379 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:46.145613 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:46.144000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.150575 kernel: audit: type=1101 audit(1747303906.142:529): pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.150656 kernel: audit: type=1103 audit(1747303906.144:530): pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.152848 kernel: audit: type=1006 audit(1747303906.144:531): pid=5566 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 May 15 10:11:46.144000 audit[5566]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebf8d870 a2=3 a3=1 items=0 ppid=1 pid=5566 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:46.144000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 10:11:46.155512 systemd-logind[1310]: New session 21 of user core. May 15 10:11:46.156748 systemd[1]: Started session-21.scope. 
May 15 10:11:46.160000 audit[5566]: USER_START pid=5566 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.161000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.283410 sshd[5566]: pam_unix(sshd:session): session closed for user core May 15 10:11:46.283000 audit[5566]: USER_END pid=5566 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.284000 audit[5566]: CRED_DISP pid=5566 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:46.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.43:22-10.0.0.1:46648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:46.286944 systemd-logind[1310]: Session 21 logged out. Waiting for processes to exit. May 15 10:11:46.287180 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:46648.service: Deactivated successfully. May 15 10:11:46.288270 systemd[1]: session-21.scope: Deactivated successfully. May 15 10:11:46.289000 systemd-logind[1310]: Removed session 21. May 15 10:11:49.849315 kubelet[2223]: I0515 10:11:49.849276 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 10:11:49.903000 audit[5590]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:49.903000 audit[5590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe76868b0 a2=0 a3=1 items=0 ppid=2368 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:49.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:49.911000 audit[5590]: NETFILTER_CFG table=nat:120 family=2 entries=58 op=nft_register_chain pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 10:11:49.911000 audit[5590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20452 a0=3 a1=ffffe76868b0 a2=0 a3=1 items=0 ppid=2368 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:49.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 10:11:51.286491 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:46656.service. 
May 15 10:11:51.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.43:22-10.0.0.1:46656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:51.287399 kernel: kauditd_printk_skb: 13 callbacks suppressed May 15 10:11:51.287445 kernel: audit: type=1130 audit(1747303911.285:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.43:22-10.0.0.1:46656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:11:51.323000 audit[5591]: USER_ACCT pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:51.324386 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 46656 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:11:51.325452 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:11:51.323000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:51.330610 kernel: audit: type=1101 audit(1747303911.323:540): pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:51.330653 kernel: audit: type=1103 audit(1747303911.323:541): pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 10:11:51.332983 kernel: audit: type=1006 audit(1747303911.323:542): pid=5591 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 May 15 10:11:51.323000 audit[5591]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe31eb9d0 a2=3 a3=1 items=0 ppid=1 pid=5591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:51.335339 systemd[1]: Started session-22.scope. May 15 10:11:51.337088 kernel: audit: type=1300 audit(1747303911.323:542): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe31eb9d0 a2=3 a3=1 items=0 ppid=1 pid=5591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:11:51.337165 systemd-logind[1310]: New session 22 of user core. 
May 15 10:11:51.323000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 15 10:11:51.338738 kernel: audit: type=1327 audit(1747303911.323:542): proctitle=737368643A20636F7265205B707269765D
May 15 10:11:51.341000 audit[5591]: USER_START pid=5591 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.342000 audit[5594]: CRED_ACQ pid=5594 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.348863 kernel: audit: type=1105 audit(1747303911.341:543): pid=5591 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.348918 kernel: audit: type=1103 audit(1747303911.342:544): pid=5594 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.465054 sshd[5591]: pam_unix(sshd:session): session closed for user core
May 15 10:11:51.464000 audit[5591]: USER_END pid=5591 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.467596 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:46656.service: Deactivated successfully.
May 15 10:11:51.468466 systemd[1]: session-22.scope: Deactivated successfully.
May 15 10:11:51.464000 audit[5591]: CRED_DISP pid=5591 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.472784 kernel: audit: type=1106 audit(1747303911.464:545): pid=5591 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.472945 kernel: audit: type=1104 audit(1747303911.464:546): pid=5591 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:51.472913 systemd-logind[1310]: Session 22 logged out. Waiting for processes to exit.
May 15 10:11:51.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.43:22-10.0.0.1:46656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:11:51.473805 systemd-logind[1310]: Removed session 22.
May 15 10:11:54.371643 kubelet[2223]: E0515 10:11:54.371605 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:11:56.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.43:22-10.0.0.1:59086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:11:56.467915 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:59086.service.
May 15 10:11:56.471515 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 15 10:11:56.471598 kernel: audit: type=1130 audit(1747303916.466:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.43:22-10.0.0.1:59086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:11:56.504000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.510604 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 59086 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:11:56.510923 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:11:56.523130 kernel: audit: type=1101 audit(1747303916.504:549): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.523237 kernel: audit: type=1103 audit(1747303916.509:550): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.523258 kernel: audit: type=1006 audit(1747303916.509:551): pid=5607 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
May 15 10:11:56.523273 kernel: audit: type=1300 audit(1747303916.509:551): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf89e500 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:11:56.523293 kernel: audit: type=1327 audit(1747303916.509:551): proctitle=737368643A20636F7265205B707269765D
May 15 10:11:56.509000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.509000 audit[5607]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf89e500 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:11:56.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 15 10:11:56.528624 systemd[1]: Started session-23.scope.
May 15 10:11:56.528658 systemd-logind[1310]: New session 23 of user core.
May 15 10:11:56.531000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.533000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.541354 kernel: audit: type=1105 audit(1747303916.531:552): pid=5607 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.541455 kernel: audit: type=1103 audit(1747303916.533:553): pid=5610 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.646411 sshd[5607]: pam_unix(sshd:session): session closed for user core
May 15 10:11:56.646000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.648991 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:59086.service: Deactivated successfully.
May 15 10:11:56.649856 systemd[1]: session-23.scope: Deactivated successfully.
May 15 10:11:56.646000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.654600 kernel: audit: type=1106 audit(1747303916.646:554): pid=5607 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.654668 kernel: audit: type=1104 audit(1747303916.646:555): pid=5607 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:11:56.654669 systemd-logind[1310]: Session 23 logged out. Waiting for processes to exit.
May 15 10:11:56.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.43:22-10.0.0.1:59086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:11:56.655719 systemd-logind[1310]: Removed session 23.
May 15 10:12:01.371259 kubelet[2223]: E0515 10:12:01.371191 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:12:01.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.43:22-10.0.0.1:59094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:12:01.649754 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:59094.service.
May 15 10:12:01.650720 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 15 10:12:01.650769 kernel: audit: type=1130 audit(1747303921.648:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.43:22-10.0.0.1:59094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:12:01.686000 audit[5621]: USER_ACCT pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.688080 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 59094 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:12:01.689361 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:12:01.687000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.694567 kernel: audit: type=1101 audit(1747303921.686:558): pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.694631 kernel: audit: type=1103 audit(1747303921.687:559): pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.696682 kernel: audit: type=1006 audit(1747303921.687:560): pid=5621 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
May 15 10:12:01.687000 audit[5621]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec73a890 a2=3 a3=1 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:12:01.700528 kernel: audit: type=1300 audit(1747303921.687:560): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec73a890 a2=3 a3=1 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:12:01.687000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 15 10:12:01.701991 kernel: audit: type=1327 audit(1747303921.687:560): proctitle=737368643A20636F7265205B707269765D
May 15 10:12:01.702534 systemd-logind[1310]: New session 24 of user core.
May 15 10:12:01.703439 systemd[1]: Started session-24.scope.
May 15 10:12:01.706000 audit[5621]: USER_START pid=5621 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.706000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.714147 kernel: audit: type=1105 audit(1747303921.706:561): pid=5621 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.714286 kernel: audit: type=1103 audit(1747303921.706:562): pid=5624 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.815539 sshd[5621]: pam_unix(sshd:session): session closed for user core
May 15 10:12:01.815000 audit[5621]: USER_END pid=5621 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.818094 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:59094.service: Deactivated successfully.
May 15 10:12:01.818966 systemd[1]: session-24.scope: Deactivated successfully.
May 15 10:12:01.815000 audit[5621]: CRED_DISP pid=5621 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.823535 kernel: audit: type=1106 audit(1747303921.815:563): pid=5621 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.823601 kernel: audit: type=1104 audit(1747303921.815:564): pid=5621 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 10:12:01.823593 systemd-logind[1310]: Session 24 logged out. Waiting for processes to exit.
May 15 10:12:01.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.43:22-10.0.0.1:59094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:12:01.824443 systemd-logind[1310]: Removed session 24.
May 15 10:12:03.349190 systemd[1]: run-containerd-runc-k8s.io-22f401d7f87f6dc7def10edbe4377ff4c82475ad8559ab64fcbe17ecd41d63e7-runc.QFZgrW.mount: Deactivated successfully.