May 16 00:42:41.737016 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 00:42:41.737039 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025
May 16 00:42:41.737047 kernel: efi: EFI v2.70 by EDK II
May 16 00:42:41.737053 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 16 00:42:41.737058 kernel: random: crng init done
May 16 00:42:41.737064 kernel: ACPI: Early table checksum verification disabled
May 16 00:42:41.737070 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 16 00:42:41.737077 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:42:41.737083 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737088 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737094 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737099 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737105 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737110 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737118 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737125 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737131 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:42:41.737136 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 00:42:41.737142 kernel: NUMA: Failed to initialise from firmware
May 16 00:42:41.737148 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:42:41.737153 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 16 00:42:41.737159 kernel: Zone ranges:
May 16 00:42:41.737165 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:42:41.737172 kernel: DMA32 empty
May 16 00:42:41.737177 kernel: Normal empty
May 16 00:42:41.737183 kernel: Movable zone start for each node
May 16 00:42:41.737189 kernel: Early memory node ranges
May 16 00:42:41.737194 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 16 00:42:41.737200 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 16 00:42:41.737206 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 16 00:42:41.737211 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 16 00:42:41.737217 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 16 00:42:41.737223 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 16 00:42:41.737229 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 16 00:42:41.737234 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:42:41.737241 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 00:42:41.737247 kernel: psci: probing for conduit method from ACPI.
May 16 00:42:41.737253 kernel: psci: PSCIv1.1 detected in firmware.
May 16 00:42:41.737258 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 00:42:41.737264 kernel: psci: Trusted OS migration not required
May 16 00:42:41.737272 kernel: psci: SMC Calling Convention v1.1
May 16 00:42:41.737279 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 00:42:41.737286 kernel: ACPI: SRAT not present
May 16 00:42:41.737292 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 16 00:42:41.737298 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 16 00:42:41.737305 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 00:42:41.737311 kernel: Detected PIPT I-cache on CPU0
May 16 00:42:41.737317 kernel: CPU features: detected: GIC system register CPU interface
May 16 00:42:41.737323 kernel: CPU features: detected: Hardware dirty bit management
May 16 00:42:41.737329 kernel: CPU features: detected: Spectre-v4
May 16 00:42:41.737336 kernel: CPU features: detected: Spectre-BHB
May 16 00:42:41.737343 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 00:42:41.737349 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 00:42:41.737355 kernel: CPU features: detected: ARM erratum 1418040
May 16 00:42:41.737361 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 00:42:41.737367 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 16 00:42:41.737373 kernel: Policy zone: DMA
May 16 00:42:41.737380 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:42:41.737387 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:42:41.737393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:42:41.737399 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:42:41.737406 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:42:41.737413 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 16 00:42:41.737420 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:42:41.737426 kernel: trace event string verifier disabled
May 16 00:42:41.737432 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:42:41.737439 kernel: rcu: RCU event tracing is enabled.
May 16 00:42:41.737445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:42:41.737451 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:42:41.737457 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:42:41.737463 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:42:41.737470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:42:41.737476 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 00:42:41.737483 kernel: GICv3: 256 SPIs implemented
May 16 00:42:41.737489 kernel: GICv3: 0 Extended SPIs implemented
May 16 00:42:41.737495 kernel: GICv3: Distributor has no Range Selector support
May 16 00:42:41.737501 kernel: Root IRQ handler: gic_handle_irq
May 16 00:42:41.737507 kernel: GICv3: 16 PPIs implemented
May 16 00:42:41.737514 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 00:42:41.737520 kernel: ACPI: SRAT not present
May 16 00:42:41.737526 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 00:42:41.737532 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 16 00:42:41.737538 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 16 00:42:41.737544 kernel: GICv3: using LPI property table @0x00000000400d0000
May 16 00:42:41.737551 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 16 00:42:41.737558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:42:41.737565 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 00:42:41.737571 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 00:42:41.737577 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 00:42:41.737583 kernel: arm-pv: using stolen time PV
May 16 00:42:41.737589 kernel: Console: colour dummy device 80x25
May 16 00:42:41.737596 kernel: ACPI: Core revision 20210730
May 16 00:42:41.737602 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 00:42:41.737609 kernel: pid_max: default: 32768 minimum: 301
May 16 00:42:41.737615 kernel: LSM: Security Framework initializing
May 16 00:42:41.737623 kernel: SELinux: Initializing.
May 16 00:42:41.737629 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:42:41.737635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:42:41.737642 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 16 00:42:41.737648 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:42:41.737654 kernel: Platform MSI: ITS@0x8080000 domain created
May 16 00:42:41.737660 kernel: PCI/MSI: ITS@0x8080000 domain created
May 16 00:42:41.737667 kernel: Remapping and enabling EFI services.
May 16 00:42:41.737673 kernel: smp: Bringing up secondary CPUs ...
May 16 00:42:41.737680 kernel: Detected PIPT I-cache on CPU1
May 16 00:42:41.737687 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 00:42:41.737693 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 16 00:42:41.737699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:42:41.737705 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 00:42:41.737712 kernel: Detected PIPT I-cache on CPU2
May 16 00:42:41.737718 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 00:42:41.737724 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 16 00:42:41.737731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:42:41.737737 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 00:42:41.737744 kernel: Detected PIPT I-cache on CPU3
May 16 00:42:41.737751 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 00:42:41.737757 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 16 00:42:41.737764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:42:41.737774 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 00:42:41.737782 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:42:41.737788 kernel: SMP: Total of 4 processors activated.
May 16 00:42:41.737795 kernel: CPU features: detected: 32-bit EL0 Support
May 16 00:42:41.737801 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 00:42:41.737808 kernel: CPU features: detected: Common not Private translations
May 16 00:42:41.737814 kernel: CPU features: detected: CRC32 instructions
May 16 00:42:41.737821 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 00:42:41.737828 kernel: CPU features: detected: LSE atomic instructions
May 16 00:42:41.737835 kernel: CPU features: detected: Privileged Access Never
May 16 00:42:41.737842 kernel: CPU features: detected: RAS Extension Support
May 16 00:42:41.737848 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 00:42:41.737855 kernel: CPU: All CPU(s) started at EL1
May 16 00:42:41.737863 kernel: alternatives: patching kernel code
May 16 00:42:41.737869 kernel: devtmpfs: initialized
May 16 00:42:41.737876 kernel: KASLR enabled
May 16 00:42:41.737892 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:42:41.737899 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:42:41.737906 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:42:41.737912 kernel: SMBIOS 3.0.0 present.
May 16 00:42:41.737919 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 16 00:42:41.737926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:42:41.737934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 00:42:41.737941 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 00:42:41.737948 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 00:42:41.737954 kernel: audit: initializing netlink subsys (disabled)
May 16 00:42:41.737970 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
May 16 00:42:41.737976 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:42:41.737983 kernel: cpuidle: using governor menu
May 16 00:42:41.737990 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 00:42:41.737997 kernel: ASID allocator initialised with 32768 entries
May 16 00:42:41.738005 kernel: ACPI: bus type PCI registered
May 16 00:42:41.738054 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:42:41.738062 kernel: Serial: AMBA PL011 UART driver
May 16 00:42:41.738069 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:42:41.738075 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 16 00:42:41.738082 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:42:41.738088 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 16 00:42:41.738095 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:42:41.738101 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 00:42:41.738109 kernel: ACPI: Added _OSI(Module Device)
May 16 00:42:41.738116 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:42:41.738123 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:42:41.738129 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:42:41.738136 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 16 00:42:41.738143 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 16 00:42:41.738149 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 16 00:42:41.738156 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:42:41.738162 kernel: ACPI: Interpreter enabled
May 16 00:42:41.738170 kernel: ACPI: Using GIC for interrupt routing
May 16 00:42:41.738177 kernel: ACPI: MCFG table detected, 1 entries
May 16 00:42:41.738183 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 00:42:41.738190 kernel: printk: console [ttyAMA0] enabled
May 16 00:42:41.738196 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:42:41.738384 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:42:41.738454 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 00:42:41.738517 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 00:42:41.738575 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 00:42:41.738632 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 00:42:41.738640 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 00:42:41.738647 kernel: PCI host bridge to bus 0000:00
May 16 00:42:41.738714 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 00:42:41.738767 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 00:42:41.738819 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 00:42:41.738875 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:42:41.739044 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 16 00:42:41.739124 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:42:41.739201 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 16 00:42:41.739261 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 16 00:42:41.739319 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:42:41.739381 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:42:41.739440 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 16 00:42:41.739506 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 16 00:42:41.739564 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 00:42:41.739615 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 00:42:41.739668 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 00:42:41.739676 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 00:42:41.739683 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 00:42:41.739692 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 00:42:41.739699 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 00:42:41.739705 kernel: iommu: Default domain type: Translated
May 16 00:42:41.739712 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 00:42:41.739718 kernel: vgaarb: loaded
May 16 00:42:41.739725 kernel: pps_core: LinuxPPS API ver. 1 registered
May 16 00:42:41.739732 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 16 00:42:41.739738 kernel: PTP clock support registered
May 16 00:42:41.739745 kernel: Registered efivars operations
May 16 00:42:41.739753 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 00:42:41.739759 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:42:41.739766 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:42:41.739772 kernel: pnp: PnP ACPI init
May 16 00:42:41.739840 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 00:42:41.739850 kernel: pnp: PnP ACPI: found 1 devices
May 16 00:42:41.739857 kernel: NET: Registered PF_INET protocol family
May 16 00:42:41.739863 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:42:41.739872 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:42:41.739887 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:42:41.739894 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:42:41.739901 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 16 00:42:41.739908 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:42:41.739914 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:42:41.739921 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:42:41.739928 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:42:41.739935 kernel: PCI: CLS 0 bytes, default 64
May 16 00:42:41.739942 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 16 00:42:41.739949 kernel: kvm [1]: HYP mode not available
May 16 00:42:41.739956 kernel: Initialise system trusted keyrings
May 16 00:42:41.739976 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:42:41.739983 kernel: Key type asymmetric registered
May 16 00:42:41.743668 kernel: Asymmetric key parser 'x509' registered
May 16 00:42:41.743678 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 00:42:41.743686 kernel: io scheduler mq-deadline registered
May 16 00:42:41.743693 kernel: io scheduler kyber registered
May 16 00:42:41.743707 kernel: io scheduler bfq registered
May 16 00:42:41.743714 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 00:42:41.743721 kernel: ACPI: button: Power Button [PWRB]
May 16 00:42:41.743728 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 00:42:41.743856 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 00:42:41.743867 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:42:41.743875 kernel: thunder_xcv, ver 1.0
May 16 00:42:41.743895 kernel: thunder_bgx, ver 1.0
May 16 00:42:41.743902 kernel: nicpf, ver 1.0
May 16 00:42:41.743912 kernel: nicvf, ver 1.0
May 16 00:42:41.744085 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 00:42:41.744151 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:42:41 UTC (1747356161)
May 16 00:42:41.744161 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 00:42:41.744168 kernel: NET: Registered PF_INET6 protocol family
May 16 00:42:41.744175 kernel: Segment Routing with IPv6
May 16 00:42:41.744181 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:42:41.744188 kernel: NET: Registered PF_PACKET protocol family
May 16 00:42:41.744199 kernel: Key type dns_resolver registered
May 16 00:42:41.744205 kernel: registered taskstats version 1
May 16 00:42:41.744213 kernel: Loading compiled-in X.509 certificates
May 16 00:42:41.744220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2'
May 16 00:42:41.744226 kernel: Key type .fscrypt registered
May 16 00:42:41.744233 kernel: Key type fscrypt-provisioning registered
May 16 00:42:41.744240 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:42:41.744247 kernel: ima: Allocated hash algorithm: sha1
May 16 00:42:41.744254 kernel: ima: No architecture policies found
May 16 00:42:41.744262 kernel: clk: Disabling unused clocks
May 16 00:42:41.744269 kernel: Freeing unused kernel memory: 36480K
May 16 00:42:41.744276 kernel: Run /init as init process
May 16 00:42:41.744283 kernel: with arguments:
May 16 00:42:41.744290 kernel: /init
May 16 00:42:41.744297 kernel: with environment:
May 16 00:42:41.744304 kernel: HOME=/
May 16 00:42:41.744311 kernel: TERM=linux
May 16 00:42:41.744318 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:42:41.744328 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:42:41.744337 systemd[1]: Detected virtualization kvm.
May 16 00:42:41.744345 systemd[1]: Detected architecture arm64.
May 16 00:42:41.744352 systemd[1]: Running in initrd.
May 16 00:42:41.744359 systemd[1]: No hostname configured, using default hostname.
May 16 00:42:41.744367 systemd[1]: Hostname set to .
May 16 00:42:41.744375 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:42:41.744383 systemd[1]: Queued start job for default target initrd.target.
May 16 00:42:41.744391 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:42:41.744398 systemd[1]: Reached target cryptsetup.target.
May 16 00:42:41.744404 systemd[1]: Reached target paths.target.
May 16 00:42:41.744411 systemd[1]: Reached target slices.target.
May 16 00:42:41.744419 systemd[1]: Reached target swap.target.
May 16 00:42:41.744426 systemd[1]: Reached target timers.target.
May 16 00:42:41.744433 systemd[1]: Listening on iscsid.socket.
May 16 00:42:41.744442 systemd[1]: Listening on iscsiuio.socket. May 16 00:42:41.744449 systemd[1]: Listening on systemd-journald-audit.socket. May 16 00:42:41.744457 systemd[1]: Listening on systemd-journald-dev-log.socket. May 16 00:42:41.744464 systemd[1]: Listening on systemd-journald.socket. May 16 00:42:41.744472 systemd[1]: Listening on systemd-networkd.socket. May 16 00:42:41.744479 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:42:41.744486 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:42:41.744493 systemd[1]: Reached target sockets.target. May 16 00:42:41.744502 systemd[1]: Starting kmod-static-nodes.service... May 16 00:42:41.744509 systemd[1]: Finished network-cleanup.service. May 16 00:42:41.744517 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:42:41.744524 systemd[1]: Starting systemd-journald.service... May 16 00:42:41.744531 systemd[1]: Starting systemd-modules-load.service... May 16 00:42:41.744539 systemd[1]: Starting systemd-resolved.service... May 16 00:42:41.744546 systemd[1]: Starting systemd-vconsole-setup.service... May 16 00:42:41.744553 systemd[1]: Finished kmod-static-nodes.service. May 16 00:42:41.744560 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:42:41.744569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:42:41.744576 systemd[1]: Finished systemd-vconsole-setup.service. May 16 00:42:41.744583 systemd[1]: Starting dracut-cmdline-ask.service... May 16 00:42:41.744590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:42:41.744598 kernel: audit: type=1130 audit(1747356161.739:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:41.744610 systemd-journald[289]: Journal started May 16 00:42:41.744657 systemd-journald[289]: Runtime Journal (/run/log/journal/a7ccc5166cbf486084674c2b037231e1) is 6.0M, max 48.7M, 42.6M free. May 16 00:42:41.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.730222 systemd-modules-load[290]: Inserted module 'overlay' May 16 00:42:41.742268 systemd-resolved[291]: Positive Trust Anchors: May 16 00:42:41.749121 systemd[1]: Started systemd-journald.service. May 16 00:42:41.749141 kernel: audit: type=1130 audit(1747356161.745:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.742275 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:42:41.742303 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:42:41.762072 kernel: audit: type=1130 audit(1747356161.752:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:41.762096 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:42:41.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.747184 systemd-resolved[291]: Defaulting to hostname 'linux'. May 16 00:42:41.749668 systemd[1]: Started systemd-resolved.service. May 16 00:42:41.755243 systemd[1]: Reached target nss-lookup.target. May 16 00:42:41.766689 systemd-modules-load[290]: Inserted module 'br_netfilter' May 16 00:42:41.767490 kernel: Bridge firewalling registered May 16 00:42:41.767693 systemd[1]: Finished dracut-cmdline-ask.service. May 16 00:42:41.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.769403 systemd[1]: Starting dracut-cmdline.service... May 16 00:42:41.771487 kernel: audit: type=1130 audit(1747356161.767:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.779426 dracut-cmdline[308]: dracut-dracut-053 May 16 00:42:41.782170 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:42:41.787024 kernel: SCSI subsystem initialized May 16 00:42:41.798557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:42:41.798619 kernel: device-mapper: uevent: version 1.0.3 May 16 00:42:41.798631 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 16 00:42:41.803651 systemd-modules-load[290]: Inserted module 'dm_multipath' May 16 00:42:41.808081 kernel: audit: type=1130 audit(1747356161.804:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.804856 systemd[1]: Finished systemd-modules-load.service. May 16 00:42:41.806727 systemd[1]: Starting systemd-sysctl.service... May 16 00:42:41.818902 systemd[1]: Finished systemd-sysctl.service. May 16 00:42:41.824028 kernel: audit: type=1130 audit(1747356161.818:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.861993 kernel: Loading iSCSI transport class v2.0-870. May 16 00:42:41.873988 kernel: iscsi: registered transport (tcp) May 16 00:42:41.890996 kernel: iscsi: registered transport (qla4xxx) May 16 00:42:41.891059 kernel: QLogic iSCSI HBA Driver May 16 00:42:41.924777 systemd[1]: Finished dracut-cmdline.service. May 16 00:42:41.928057 kernel: audit: type=1130 audit(1747356161.924:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 16 00:42:41.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:41.926610 systemd[1]: Starting dracut-pre-udev.service... May 16 00:42:41.970986 kernel: raid6: neonx8 gen() 13668 MB/s May 16 00:42:41.987976 kernel: raid6: neonx8 xor() 10732 MB/s May 16 00:42:42.004976 kernel: raid6: neonx4 gen() 13473 MB/s May 16 00:42:42.021976 kernel: raid6: neonx4 xor() 11118 MB/s May 16 00:42:42.038974 kernel: raid6: neonx2 gen() 12947 MB/s May 16 00:42:42.055982 kernel: raid6: neonx2 xor() 10515 MB/s May 16 00:42:42.072996 kernel: raid6: neonx1 gen() 10558 MB/s May 16 00:42:42.090005 kernel: raid6: neonx1 xor() 8715 MB/s May 16 00:42:42.106989 kernel: raid6: int64x8 gen() 6256 MB/s May 16 00:42:42.123988 kernel: raid6: int64x8 xor() 3541 MB/s May 16 00:42:42.140992 kernel: raid6: int64x4 gen() 7211 MB/s May 16 00:42:42.158006 kernel: raid6: int64x4 xor() 3848 MB/s May 16 00:42:42.174986 kernel: raid6: int64x2 gen() 6145 MB/s May 16 00:42:42.192002 kernel: raid6: int64x2 xor() 3318 MB/s May 16 00:42:42.208984 kernel: raid6: int64x1 gen() 5037 MB/s May 16 00:42:42.226247 kernel: raid6: int64x1 xor() 2644 MB/s May 16 00:42:42.226281 kernel: raid6: using algorithm neonx8 gen() 13668 MB/s May 16 00:42:42.226291 kernel: raid6: .... 
xor() 10732 MB/s, rmw enabled May 16 00:42:42.226310 kernel: raid6: using neon recovery algorithm May 16 00:42:42.238984 kernel: xor: measuring software checksum speed May 16 00:42:42.239993 kernel: 8regs : 15746 MB/sec May 16 00:42:42.240008 kernel: 32regs : 20717 MB/sec May 16 00:42:42.241008 kernel: arm64_neon : 27757 MB/sec May 16 00:42:42.241023 kernel: xor: using function: arm64_neon (27757 MB/sec) May 16 00:42:42.295995 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 16 00:42:42.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:42.306806 systemd[1]: Finished dracut-pre-udev.service. May 16 00:42:42.309981 kernel: audit: type=1130 audit(1747356162.306:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:42.309000 audit: BPF prog-id=7 op=LOAD May 16 00:42:42.311000 audit: BPF prog-id=8 op=LOAD May 16 00:42:42.312302 kernel: audit: type=1334 audit(1747356162.309:10): prog-id=7 op=LOAD May 16 00:42:42.311655 systemd[1]: Starting systemd-udevd.service... May 16 00:42:42.325432 systemd-udevd[491]: Using default interface naming scheme 'v252'. May 16 00:42:42.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:42.329109 systemd[1]: Started systemd-udevd.service. May 16 00:42:42.330604 systemd[1]: Starting dracut-pre-trigger.service... 
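The raid6 lines above are the kernel benchmarking each gen() routine and keeping the fastest; the selection ("using algorithm neonx8 gen() 13668 MB/s") can be reproduced from the logged throughputs:

```shell
# Throughputs copied from the raid6 benchmark lines in the log above;
# the kernel simply keeps the gen() routine with the highest MB/s figure.
best=$(printf '%s\n' \
  'neonx8 13668' 'neonx4 13473' 'neonx2 12947' 'neonx1 10558' \
  'int64x8 6256' 'int64x4 7211' 'int64x2 6145' 'int64x1 5037' |
  sort -k2,2 -rn | head -n1)
echo "$best"   # matches the "using algorithm neonx8 gen() 13668 MB/s" line
```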
May 16 00:42:42.349638 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation May 16 00:42:42.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:42.383734 systemd[1]: Finished dracut-pre-trigger.service. May 16 00:42:42.386380 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:42:42.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:42.421518 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:42:42.461967 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:42:42.466011 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:42:42.466037 kernel: GPT:9289727 != 19775487 May 16 00:42:42.466046 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:42:42.466055 kernel: GPT:9289727 != 19775487 May 16 00:42:42.466064 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:42:42.466073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:42:42.485533 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 16 00:42:42.486451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 16 00:42:42.490886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 16 00:42:42.495589 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (546) May 16 00:42:42.498670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 16 00:42:42.505657 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:42:42.507619 systemd[1]: Starting disk-uuid.service... 
May 16 00:42:42.513694 disk-uuid[567]: Primary Header is updated. May 16 00:42:42.513694 disk-uuid[567]: Secondary Entries is updated. May 16 00:42:42.513694 disk-uuid[567]: Secondary Header is updated. May 16 00:42:42.516986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:42:43.532519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:42:43.532572 disk-uuid[568]: The operation has completed successfully. May 16 00:42:43.555557 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:42:43.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.555650 systemd[1]: Finished disk-uuid.service. May 16 00:42:43.557308 systemd[1]: Starting verity-setup.service... May 16 00:42:43.577992 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:42:43.598863 systemd[1]: Found device dev-mapper-usr.device. May 16 00:42:43.601100 systemd[1]: Mounting sysusr-usr.mount... May 16 00:42:43.603624 systemd[1]: Finished verity-setup.service. May 16 00:42:43.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.654997 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 16 00:42:43.655173 systemd[1]: Mounted sysusr-usr.mount. May 16 00:42:43.655808 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 16 00:42:43.656563 systemd[1]: Starting ignition-setup.service... 
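The earlier GPT warnings ("GPT:9289727 != 19775487") compare the alternate-header LBA recorded in the primary header with the disk's real last LBA; disk-uuid then rewrites the secondary header and entries to the true end of the disk, which is why the warnings stop. The arithmetic, using the sizes from the log:

```shell
# The virtio disk has 19775488 512-byte sectors (from the log), so its
# last LBA is sectors-1. The primary GPT header instead points the backup
# header at LBA 9289727, the end of the smaller image the GPT was built for.
sectors=19775488
last_lba=$((sectors - 1))
recorded=9289727
echo "expected backup header at LBA $last_lba, found at $recorded"
[ "$recorded" -ne "$last_lba" ] && echo "backup header misplaced"
```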
May 16 00:42:43.658761 systemd[1]: Starting parse-ip-for-networkd.service... May 16 00:42:43.665115 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:42:43.665146 kernel: BTRFS info (device vda6): using free space tree May 16 00:42:43.665157 kernel: BTRFS info (device vda6): has skinny extents May 16 00:42:43.673004 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:42:43.729659 systemd[1]: Finished parse-ip-for-networkd.service. May 16 00:42:43.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.730000 audit: BPF prog-id=9 op=LOAD May 16 00:42:43.731721 systemd[1]: Starting systemd-networkd.service... May 16 00:42:43.749807 systemd-networkd[737]: lo: Link UP May 16 00:42:43.749820 systemd-networkd[737]: lo: Gained carrier May 16 00:42:43.750203 systemd-networkd[737]: Enumeration completed May 16 00:42:43.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.750272 systemd[1]: Started systemd-networkd.service. May 16 00:42:43.750378 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:42:43.751416 systemd[1]: Reached target network.target. May 16 00:42:43.753194 systemd-networkd[737]: eth0: Link UP May 16 00:42:43.753197 systemd-networkd[737]: eth0: Gained carrier May 16 00:42:43.753289 systemd[1]: Starting iscsiuio.service... May 16 00:42:43.764186 systemd[1]: Started iscsiuio.service. May 16 00:42:43.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
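systemd-networkd configures eth0 from /usr/lib/systemd/network/zz-default.network and then acquires a DHCPv4 lease. A sketch of what a catch-all DHCP .network unit looks like; the contents below are an assumption for illustration, not the file Flatcar actually ships:

```shell
# Hypothetical minimal .network unit, written to a scratch file rather
# than /usr/lib/systemd/network/ (assumed contents, for illustration only).
unit="$(mktemp)"
cat > "$unit" <<'EOF'
[Match]
Name=eth*

[Network]
DHCP=yes
EOF
grep -c '^DHCP=yes$' "$unit"
```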
res=success' May 16 00:42:43.765634 systemd[1]: Starting iscsid.service... May 16 00:42:43.768887 iscsid[742]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 16 00:42:43.768887 iscsid[742]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 16 00:42:43.768887 iscsid[742]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 16 00:42:43.768887 iscsid[742]: If using hardware iscsi like qla4xxx this message can be ignored. May 16 00:42:43.768887 iscsid[742]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 16 00:42:43.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.776781 iscsid[742]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 16 00:42:43.771703 systemd[1]: Started iscsid.service. May 16 00:42:43.775213 systemd[1]: Starting dracut-initqueue.service... May 16 00:42:43.776044 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:42:43.783974 systemd[1]: Finished ignition-setup.service. May 16 00:42:43.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.785390 systemd[1]: Starting ignition-fetch-offline.service... May 16 00:42:43.787831 systemd[1]: Finished dracut-initqueue.service. 
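The iscsid warning above is harmless in this VM (no software-iSCSI targets are configured), but creating the file it asks for is a one-liner. The IQN below is illustrative, and a scratch directory stands in for /etc/iscsi:

```shell
# Illustrative IQN; real deployments use a reversed domain they control.
dir="$(mktemp -d)"
printf 'InitiatorName=iqn.2001-04.com.example:node1\n' \
  > "$dir/initiatorname.iscsi"
# iscsid expects a single InitiatorName=iqn.yyyy-mm.<reversed-domain>[:id] line
grep -c '^InitiatorName=iqn\.' "$dir/initiatorname.iscsi"
```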
May 16 00:42:43.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.788606 systemd[1]: Reached target remote-fs-pre.target. May 16 00:42:43.789701 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:42:43.791159 systemd[1]: Reached target remote-fs.target. May 16 00:42:43.793338 systemd[1]: Starting dracut-pre-mount.service... May 16 00:42:43.801643 systemd[1]: Finished dracut-pre-mount.service. May 16 00:42:43.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.868405 ignition[752]: Ignition 2.14.0 May 16 00:42:43.868416 ignition[752]: Stage: fetch-offline May 16 00:42:43.868456 ignition[752]: no configs at "/usr/lib/ignition/base.d" May 16 00:42:43.868465 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:42:43.868595 ignition[752]: parsed url from cmdline: "" May 16 00:42:43.868598 ignition[752]: no config URL provided May 16 00:42:43.868603 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:42:43.868610 ignition[752]: no config at "/usr/lib/ignition/user.ign" May 16 00:42:43.868629 ignition[752]: op(1): [started] loading QEMU firmware config module May 16 00:42:43.868633 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:42:43.872198 ignition[752]: op(1): [finished] loading QEMU firmware config module May 16 00:42:43.872220 ignition[752]: QEMU firmware config was not found. Ignoring... 
May 16 00:42:43.915910 ignition[752]: parsing config with SHA512: c454db66c9e9810630ca34f79572128ae791277cd834a02c8a9ad59d1fdb6901e89d63249642360d46fa5171b72671003fd553d55da9d1edb72d332e05e76657 May 16 00:42:43.923568 unknown[752]: fetched base config from "system" May 16 00:42:43.923583 unknown[752]: fetched user config from "qemu" May 16 00:42:43.924178 ignition[752]: fetch-offline: fetch-offline passed May 16 00:42:43.924239 ignition[752]: Ignition finished successfully May 16 00:42:43.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.926751 systemd[1]: Finished ignition-fetch-offline.service. May 16 00:42:43.927762 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:42:43.928577 systemd[1]: Starting ignition-kargs.service... May 16 00:42:43.938246 ignition[765]: Ignition 2.14.0 May 16 00:42:43.938256 ignition[765]: Stage: kargs May 16 00:42:43.938349 ignition[765]: no configs at "/usr/lib/ignition/base.d" May 16 00:42:43.938359 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:42:43.939287 ignition[765]: kargs: kargs passed May 16 00:42:43.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.941264 systemd[1]: Finished ignition-kargs.service. May 16 00:42:43.939330 ignition[765]: Ignition finished successfully May 16 00:42:43.943774 systemd[1]: Starting ignition-disks.service... 
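Ignition logs the SHA512 of the exact config bytes it parsed (the long digest above), so a config on disk can be matched against the journal offline. A sketch with a stand-in config file; the JSON body here is illustrative, not the config this machine booted with:

```shell
# Stand-in config; the digest in the journal is sha512sum of the real bytes.
cfg="$(mktemp)"
printf '{"ignition":{"version":"2.3.0"}}' > "$cfg"
digest="$(sha512sum "$cfg" | awk '{print $1}')"
echo "${#digest}"   # a SHA-512 hex digest is always 128 characters
```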
May 16 00:42:43.950465 ignition[771]: Ignition 2.14.0 May 16 00:42:43.950474 ignition[771]: Stage: disks May 16 00:42:43.950565 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 16 00:42:43.953134 systemd[1]: Finished ignition-disks.service. May 16 00:42:43.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:43.950575 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:42:43.954057 systemd[1]: Reached target initrd-root-device.target. May 16 00:42:43.951510 ignition[771]: disks: disks passed May 16 00:42:43.955331 systemd[1]: Reached target local-fs-pre.target. May 16 00:42:43.951554 ignition[771]: Ignition finished successfully May 16 00:42:43.956874 systemd[1]: Reached target local-fs.target. May 16 00:42:43.958256 systemd[1]: Reached target sysinit.target. May 16 00:42:43.959412 systemd[1]: Reached target basic.target. May 16 00:42:43.961561 systemd[1]: Starting systemd-fsck-root.service... May 16 00:42:43.972013 systemd-fsck[779]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 16 00:42:44.034243 systemd[1]: Finished systemd-fsck-root.service. May 16 00:42:44.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:44.036578 systemd[1]: Mounting sysroot.mount... May 16 00:42:44.044739 systemd[1]: Mounted sysroot.mount. May 16 00:42:44.045919 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 16 00:42:44.045462 systemd[1]: Reached target initrd-root-fs.target. May 16 00:42:44.047682 systemd[1]: Mounting sysroot-usr.mount... May 16 00:42:44.048578 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
May 16 00:42:44.048620 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:42:44.048644 systemd[1]: Reached target ignition-diskful.target. May 16 00:42:44.050543 systemd[1]: Mounted sysroot-usr.mount. May 16 00:42:44.052202 systemd[1]: Starting initrd-setup-root.service... May 16 00:42:44.056792 initrd-setup-root[789]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:42:44.060427 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory May 16 00:42:44.064093 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:42:44.068082 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:42:44.094246 systemd[1]: Finished initrd-setup-root.service. May 16 00:42:44.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:44.095630 systemd[1]: Starting ignition-mount.service... May 16 00:42:44.096782 systemd[1]: Starting sysroot-boot.service... May 16 00:42:44.101179 bash[830]: umount: /sysroot/usr/share/oem: not mounted. May 16 00:42:44.109455 ignition[832]: INFO : Ignition 2.14.0 May 16 00:42:44.110504 ignition[832]: INFO : Stage: mount May 16 00:42:44.110504 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:42:44.110504 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:42:44.113326 ignition[832]: INFO : mount: mount passed May 16 00:42:44.113326 ignition[832]: INFO : Ignition finished successfully May 16 00:42:44.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:44.112250 systemd[1]: Finished ignition-mount.service. 
May 16 00:42:44.118417 systemd[1]: Finished sysroot-boot.service. May 16 00:42:44.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:44.614090 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 16 00:42:44.620075 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (841) May 16 00:42:44.620115 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:42:44.621262 kernel: BTRFS info (device vda6): using free space tree May 16 00:42:44.621275 kernel: BTRFS info (device vda6): has skinny extents May 16 00:42:44.624065 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 16 00:42:44.625581 systemd[1]: Starting ignition-files.service... May 16 00:42:44.638651 ignition[861]: INFO : Ignition 2.14.0 May 16 00:42:44.638651 ignition[861]: INFO : Stage: files May 16 00:42:44.640131 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:42:44.640131 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:42:44.640131 ignition[861]: DEBUG : files: compiled without relabeling support, skipping May 16 00:42:44.642609 ignition[861]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:42:44.642609 ignition[861]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:42:44.645743 ignition[861]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:42:44.646864 ignition[861]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:42:44.648171 unknown[861]: wrote ssh authorized keys file for user: core May 16 00:42:44.649253 ignition[861]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:42:44.649253 ignition[861]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 16 00:42:44.649253 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 16 00:42:44.649253 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 00:42:44.649253 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 16 00:42:44.740220 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:42:44.924834 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 00:42:44.924834 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:42:44.927762 ignition[861]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:42:44.927762 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 16 00:42:45.705091 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:42:45.740221 systemd-networkd[737]: eth0: Gained IPv6LL May 16 00:42:47.777715 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:42:47.777715 ignition[861]: INFO : files: op(c): [started] processing unit "containerd.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 16 00:42:47.780745 ignition[861]: INFO : files: op(c): op(d): [finished] writing systemd drop-in 
"10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 16 00:42:47.780745 ignition[861]: INFO : files: op(c): [finished] processing unit "containerd.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:42:47.780745 ignition[861]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:42:47.818575 ignition[861]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 
00:42:47.820560 ignition[861]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:42:47.820560 ignition[861]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:42:47.820560 ignition[861]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:42:47.820560 ignition[861]: INFO : files: files passed May 16 00:42:47.820560 ignition[861]: INFO : Ignition finished successfully May 16 00:42:47.829841 kernel: kauditd_printk_skb: 22 callbacks suppressed May 16 00:42:47.829866 kernel: audit: type=1130 audit(1747356167.821:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.820679 systemd[1]: Finished ignition-files.service. May 16 00:42:47.823205 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 16 00:42:47.835480 kernel: audit: type=1130 audit(1747356167.831:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.835498 kernel: audit: type=1131 audit(1747356167.831:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:47.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.826923 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 16 00:42:47.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.839277 initrd-setup-root-after-ignition[886]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 16 00:42:47.840719 kernel: audit: type=1130 audit(1747356167.836:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.827649 systemd[1]: Starting ignition-quench.service... May 16 00:42:47.841796 initrd-setup-root-after-ignition[889]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:42:47.830612 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:42:47.830699 systemd[1]: Finished ignition-quench.service. May 16 00:42:47.833214 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 16 00:42:47.836255 systemd[1]: Reached target ignition-complete.target. May 16 00:42:47.840439 systemd[1]: Starting initrd-parse-etc.service... May 16 00:42:47.853044 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:42:47.853136 systemd[1]: Finished initrd-parse-etc.service. May 16 00:42:47.858293 kernel: audit: type=1130 audit(1747356167.854:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 16 00:42:47.858312 kernel: audit: type=1131 audit(1747356167.854:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.854499 systemd[1]: Reached target initrd-fs.target. May 16 00:42:47.858956 systemd[1]: Reached target initrd.target. May 16 00:42:47.859974 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 16 00:42:47.860733 systemd[1]: Starting dracut-pre-pivot.service... May 16 00:42:47.871166 systemd[1]: Finished dracut-pre-pivot.service. May 16 00:42:47.874025 kernel: audit: type=1130 audit(1747356167.870:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.872478 systemd[1]: Starting initrd-cleanup.service... May 16 00:42:47.880090 systemd[1]: Stopped target nss-lookup.target. May 16 00:42:47.880740 systemd[1]: Stopped target remote-cryptsetup.target. May 16 00:42:47.881777 systemd[1]: Stopped target timers.target. May 16 00:42:47.882764 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 16 00:42:47.885997 kernel: audit: type=1131 audit(1747356167.883:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.882861 systemd[1]: Stopped dracut-pre-pivot.service. May 16 00:42:47.883811 systemd[1]: Stopped target initrd.target. May 16 00:42:47.886580 systemd[1]: Stopped target basic.target. May 16 00:42:47.887542 systemd[1]: Stopped target ignition-complete.target. May 16 00:42:47.888608 systemd[1]: Stopped target ignition-diskful.target. May 16 00:42:47.889619 systemd[1]: Stopped target initrd-root-device.target. May 16 00:42:47.890686 systemd[1]: Stopped target remote-fs.target. May 16 00:42:47.892211 systemd[1]: Stopped target remote-fs-pre.target. May 16 00:42:47.893271 systemd[1]: Stopped target sysinit.target. May 16 00:42:47.894202 systemd[1]: Stopped target local-fs.target. May 16 00:42:47.895179 systemd[1]: Stopped target local-fs-pre.target. May 16 00:42:47.896364 systemd[1]: Stopped target swap.target. May 16 00:42:47.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.897307 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:42:47.901734 kernel: audit: type=1131 audit(1747356167.898:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.897417 systemd[1]: Stopped dracut-pre-mount.service. 
May 16 00:42:47.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.898454 systemd[1]: Stopped target cryptsetup.target. May 16 00:42:47.905428 kernel: audit: type=1131 audit(1747356167.902:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.901026 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:42:47.901130 systemd[1]: Stopped dracut-initqueue.service. May 16 00:42:47.902380 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:42:47.902475 systemd[1]: Stopped ignition-fetch-offline.service. May 16 00:42:47.905100 systemd[1]: Stopped target paths.target. May 16 00:42:47.905940 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:42:47.910005 systemd[1]: Stopped systemd-ask-password-console.path. May 16 00:42:47.910727 systemd[1]: Stopped target slices.target. May 16 00:42:47.911722 systemd[1]: Stopped target sockets.target. May 16 00:42:47.912656 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:42:47.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.912774 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
May 16 00:42:47.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.913749 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:42:47.913838 systemd[1]: Stopped ignition-files.service. May 16 00:42:47.917260 iscsid[742]: iscsid shutting down. May 16 00:42:47.915993 systemd[1]: Stopping ignition-mount.service... May 16 00:42:47.916917 systemd[1]: Stopping iscsid.service... May 16 00:42:47.918322 systemd[1]: Stopping sysroot-boot.service... May 16 00:42:47.919057 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:42:47.919210 systemd[1]: Stopped systemd-udev-trigger.service. May 16 00:42:47.920166 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:42:47.920258 systemd[1]: Stopped dracut-pre-trigger.service. May 16 00:42:47.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:47.924087 ignition[902]: INFO : Ignition 2.14.0 May 16 00:42:47.924087 ignition[902]: INFO : Stage: umount May 16 00:42:47.924087 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:42:47.924087 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:42:47.924087 ignition[902]: INFO : umount: umount passed May 16 00:42:47.924087 ignition[902]: INFO : Ignition finished successfully May 16 00:42:47.922569 systemd[1]: iscsid.service: Deactivated successfully. May 16 00:42:47.922660 systemd[1]: Stopped iscsid.service. May 16 00:42:47.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.923836 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:42:47.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.923911 systemd[1]: Closed iscsid.socket. May 16 00:42:47.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.924614 systemd[1]: Stopping iscsiuio.service... May 16 00:42:47.929375 systemd[1]: iscsiuio.service: Deactivated successfully. May 16 00:42:47.929460 systemd[1]: Stopped iscsiuio.service. May 16 00:42:47.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:47.930223 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:42:47.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.930295 systemd[1]: Stopped ignition-mount.service. May 16 00:42:47.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.931411 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:42:47.931485 systemd[1]: Finished initrd-cleanup.service. May 16 00:42:47.933225 systemd[1]: Stopped target network.target. May 16 00:42:47.934542 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:42:47.934578 systemd[1]: Closed iscsiuio.socket. May 16 00:42:47.935362 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:42:47.935405 systemd[1]: Stopped ignition-disks.service. May 16 00:42:47.937791 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:42:47.937833 systemd[1]: Stopped ignition-kargs.service. May 16 00:42:47.938916 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:42:47.938955 systemd[1]: Stopped ignition-setup.service. May 16 00:42:47.941017 systemd[1]: Stopping systemd-networkd.service... May 16 00:42:47.942286 systemd[1]: Stopping systemd-resolved.service... May 16 00:42:47.944454 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:42:47.951282 systemd-networkd[737]: eth0: DHCPv6 lease lost May 16 00:42:47.953114 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:42:47.953213 systemd[1]: Stopped systemd-resolved.service. May 16 00:42:47.956754 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 16 00:42:47.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.956845 systemd[1]: Stopped systemd-networkd.service. May 16 00:42:47.958498 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:42:47.959000 audit: BPF prog-id=6 op=UNLOAD May 16 00:42:47.958523 systemd[1]: Closed systemd-networkd.socket. May 16 00:42:47.961000 audit: BPF prog-id=9 op=UNLOAD May 16 00:42:47.960883 systemd[1]: Stopping network-cleanup.service... May 16 00:42:47.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.961768 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:42:47.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.961832 systemd[1]: Stopped parse-ip-for-networkd.service. May 16 00:42:47.963230 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:42:47.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.963271 systemd[1]: Stopped systemd-sysctl.service. May 16 00:42:47.965709 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:42:47.965755 systemd[1]: Stopped systemd-modules-load.service. 
May 16 00:42:47.966933 systemd[1]: Stopping systemd-udevd.service... May 16 00:42:47.972610 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:42:47.975199 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:42:47.975308 systemd[1]: Stopped network-cleanup.service. May 16 00:42:47.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.982738 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:42:47.982880 systemd[1]: Stopped systemd-udevd.service. May 16 00:42:47.983765 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:42:47.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.983806 systemd[1]: Closed systemd-udevd-control.socket. May 16 00:42:47.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.984572 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:42:47.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.984603 systemd[1]: Closed systemd-udevd-kernel.socket. May 16 00:42:47.985826 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
May 16 00:42:47.985866 systemd[1]: Stopped dracut-pre-udev.service. May 16 00:42:47.986983 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:42:47.987020 systemd[1]: Stopped dracut-cmdline.service. May 16 00:42:47.988224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:42:47.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.988258 systemd[1]: Stopped dracut-cmdline-ask.service. May 16 00:42:47.990028 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 16 00:42:47.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.991081 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:42:47.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.991135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
May 16 00:42:47.992844 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:42:48.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:47.992881 systemd[1]: Stopped kmod-static-nodes.service. May 16 00:42:47.994182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:42:47.994221 systemd[1]: Stopped systemd-vconsole-setup.service. May 16 00:42:47.995940 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:42:47.996370 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:42:47.996454 systemd[1]: Stopped sysroot-boot.service. May 16 00:42:47.997478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:42:47.997549 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 16 00:42:47.998830 systemd[1]: Reached target initrd-switch-root.target. May 16 00:42:48.000183 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:42:48.000233 systemd[1]: Stopped initrd-setup-root.service. May 16 00:42:48.008000 audit: BPF prog-id=5 op=UNLOAD May 16 00:42:48.008000 audit: BPF prog-id=4 op=UNLOAD May 16 00:42:48.008000 audit: BPF prog-id=3 op=UNLOAD May 16 00:42:48.002119 systemd[1]: Starting initrd-switch-root.service... May 16 00:42:48.009000 audit: BPF prog-id=8 op=UNLOAD May 16 00:42:48.009000 audit: BPF prog-id=7 op=UNLOAD May 16 00:42:48.007442 systemd[1]: Switching root. May 16 00:42:48.038118 systemd-journald[289]: Journal stopped May 16 00:42:50.084846 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). May 16 00:42:50.084914 kernel: SELinux: Class mctp_socket not defined in policy. May 16 00:42:50.084929 kernel: SELinux: Class anon_inode not defined in policy. 
May 16 00:42:50.084939 kernel: SELinux: the above unknown classes and permissions will be allowed May 16 00:42:50.084949 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:42:50.084958 kernel: SELinux: policy capability open_perms=1 May 16 00:42:50.084994 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:42:50.085004 kernel: SELinux: policy capability always_check_network=0 May 16 00:42:50.085014 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:42:50.085026 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:42:50.085039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:42:50.085054 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:42:50.085068 systemd[1]: Successfully loaded SELinux policy in 31.846ms. May 16 00:42:50.085085 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.763ms. May 16 00:42:50.085099 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:42:50.085112 systemd[1]: Detected virtualization kvm. May 16 00:42:50.085122 systemd[1]: Detected architecture arm64. May 16 00:42:50.085133 systemd[1]: Detected first boot. May 16 00:42:50.085143 systemd[1]: Initializing machine ID from VM UUID. May 16 00:42:50.085155 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 16 00:42:50.085166 systemd[1]: Populated /etc with preset unit settings. May 16 00:42:50.085177 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 16 00:42:50.085189 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:42:50.085201 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:42:50.085213 systemd[1]: Queued start job for default target multi-user.target. May 16 00:42:50.085224 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 16 00:42:50.085235 systemd[1]: Created slice system-addon\x2dconfig.slice. May 16 00:42:50.085246 systemd[1]: Created slice system-addon\x2drun.slice. May 16 00:42:50.085256 systemd[1]: Created slice system-getty.slice. May 16 00:42:50.085266 systemd[1]: Created slice system-modprobe.slice. May 16 00:42:50.085277 systemd[1]: Created slice system-serial\x2dgetty.slice. May 16 00:42:50.085289 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 16 00:42:50.085301 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 16 00:42:50.085311 systemd[1]: Created slice user.slice. May 16 00:42:50.085322 systemd[1]: Started systemd-ask-password-console.path. May 16 00:42:50.085332 systemd[1]: Started systemd-ask-password-wall.path. May 16 00:42:50.085343 systemd[1]: Set up automount boot.automount. May 16 00:42:50.085354 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 16 00:42:50.085364 systemd[1]: Reached target integritysetup.target. May 16 00:42:50.085375 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:42:50.085387 systemd[1]: Reached target remote-fs.target. May 16 00:42:50.085398 systemd[1]: Reached target slices.target. May 16 00:42:50.085409 systemd[1]: Reached target swap.target. May 16 00:42:50.085420 systemd[1]: Reached target torcx.target. May 16 00:42:50.085430 systemd[1]: Reached target veritysetup.target. 
May 16 00:42:50.085440 systemd[1]: Listening on systemd-coredump.socket. May 16 00:42:50.085451 systemd[1]: Listening on systemd-initctl.socket. May 16 00:42:50.085461 systemd[1]: Listening on systemd-journald-audit.socket. May 16 00:42:50.085471 systemd[1]: Listening on systemd-journald-dev-log.socket. May 16 00:42:50.085483 systemd[1]: Listening on systemd-journald.socket. May 16 00:42:50.085493 systemd[1]: Listening on systemd-networkd.socket. May 16 00:42:50.085504 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:42:50.085515 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:42:50.085526 systemd[1]: Listening on systemd-userdbd.socket. May 16 00:42:50.085537 systemd[1]: Mounting dev-hugepages.mount... May 16 00:42:50.085547 systemd[1]: Mounting dev-mqueue.mount... May 16 00:42:50.085558 systemd[1]: Mounting media.mount... May 16 00:42:50.085568 systemd[1]: Mounting sys-kernel-debug.mount... May 16 00:42:50.085580 systemd[1]: Mounting sys-kernel-tracing.mount... May 16 00:42:50.085591 systemd[1]: Mounting tmp.mount... May 16 00:42:50.085602 systemd[1]: Starting flatcar-tmpfiles.service... May 16 00:42:50.085613 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:42:50.085624 systemd[1]: Starting kmod-static-nodes.service... May 16 00:42:50.085635 systemd[1]: Starting modprobe@configfs.service... May 16 00:42:50.085646 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:42:50.085657 systemd[1]: Starting modprobe@drm.service... May 16 00:42:50.085667 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:42:50.085679 systemd[1]: Starting modprobe@fuse.service... May 16 00:42:50.085690 systemd[1]: Starting modprobe@loop.service... May 16 00:42:50.085702 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 16 00:42:50.085713 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 16 00:42:50.085723 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 16 00:42:50.085734 systemd[1]: Starting systemd-journald.service... May 16 00:42:50.085745 systemd[1]: Starting systemd-modules-load.service... May 16 00:42:50.085755 systemd[1]: Starting systemd-network-generator.service... May 16 00:42:50.085766 systemd[1]: Starting systemd-remount-fs.service... May 16 00:42:50.085777 kernel: fuse: init (API version 7.34) May 16 00:42:50.085789 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:42:50.085799 systemd[1]: Mounted dev-hugepages.mount. May 16 00:42:50.085809 kernel: loop: module loaded May 16 00:42:50.085820 systemd[1]: Mounted dev-mqueue.mount. May 16 00:42:50.085830 systemd[1]: Mounted media.mount. May 16 00:42:50.085841 systemd[1]: Mounted sys-kernel-debug.mount. May 16 00:42:50.085851 systemd[1]: Mounted sys-kernel-tracing.mount. May 16 00:42:50.085862 systemd[1]: Mounted tmp.mount. May 16 00:42:50.085873 systemd[1]: Finished kmod-static-nodes.service. May 16 00:42:50.085897 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:42:50.085910 systemd[1]: Finished modprobe@configfs.service. May 16 00:42:50.085922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:42:50.085932 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:42:50.085943 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:42:50.085953 systemd[1]: Finished modprobe@drm.service. May 16 00:42:50.085975 systemd-journald[1034]: Journal started May 16 00:42:50.086019 systemd-journald[1034]: Runtime Journal (/run/log/journal/a7ccc5166cbf486084674c2b037231e1) is 6.0M, max 48.7M, 42.6M free. 
May 16 00:42:49.988000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:42:49.988000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 16 00:42:50.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.077000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 16 00:42:50.077000 audit[1034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe0e770d0 a2=4000 a3=1 items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:50.077000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 16 00:42:50.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:50.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.087272 systemd[1]: Started systemd-journald.service. May 16 00:42:50.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.088186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:42:50.088421 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:42:50.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.089336 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:42:50.089536 systemd[1]: Finished modprobe@fuse.service. 
May 16 00:42:50.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.090373 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:42:50.090557 systemd[1]: Finished modprobe@loop.service. May 16 00:42:50.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.091550 systemd[1]: Finished systemd-modules-load.service. May 16 00:42:50.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.092602 systemd[1]: Finished systemd-network-generator.service. May 16 00:42:50.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.093764 systemd[1]: Finished systemd-remount-fs.service. 
May 16 00:42:50.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.094740 systemd[1]: Reached target network-pre.target. May 16 00:42:50.098909 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 16 00:42:50.100688 systemd[1]: Mounting sys-kernel-config.mount... May 16 00:42:50.101396 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:42:50.104730 systemd[1]: Starting systemd-hwdb-update.service... May 16 00:42:50.106667 systemd[1]: Starting systemd-journal-flush.service... May 16 00:42:50.107346 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:42:50.108704 systemd[1]: Starting systemd-random-seed.service... May 16 00:42:50.109505 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:42:50.110835 systemd[1]: Starting systemd-sysctl.service... May 16 00:42:50.112905 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 16 00:42:50.115772 systemd[1]: Mounted sys-kernel-config.mount. May 16 00:42:50.119250 systemd[1]: Finished flatcar-tmpfiles.service. May 16 00:42:50.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.120207 systemd[1]: Finished systemd-random-seed.service. 
May 16 00:42:50.121097 systemd[1]: Reached target first-boot-complete.target. May 16 00:42:50.122515 systemd-journald[1034]: Time spent on flushing to /var/log/journal/a7ccc5166cbf486084674c2b037231e1 is 22.007ms for 942 entries. May 16 00:42:50.122515 systemd-journald[1034]: System Journal (/var/log/journal/a7ccc5166cbf486084674c2b037231e1) is 8.0M, max 195.6M, 187.6M free. May 16 00:42:50.157452 systemd-journald[1034]: Received client request to flush runtime journal. May 16 00:42:50.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:50.122950 systemd[1]: Starting systemd-sysusers.service... May 16 00:42:50.137095 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:42:50.157992 udevadm[1085]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:42:50.139016 systemd[1]: Starting systemd-udev-settle.service... May 16 00:42:50.141950 systemd[1]: Finished systemd-sysctl.service. May 16 00:42:50.153906 systemd[1]: Finished systemd-sysusers.service. May 16 00:42:50.156173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:42:50.159247 systemd[1]: Finished systemd-journal-flush.service. 
May 16 00:42:50.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.174829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 16 00:42:50.477035 systemd[1]: Finished systemd-hwdb-update.service.
May 16 00:42:50.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.478941 systemd[1]: Starting systemd-udevd.service...
May 16 00:42:50.502154 systemd-udevd[1094]: Using default interface naming scheme 'v252'.
May 16 00:42:50.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.519577 systemd[1]: Started systemd-udevd.service.
May 16 00:42:50.521752 systemd[1]: Starting systemd-networkd.service...
May 16 00:42:50.539878 systemd[1]: Starting systemd-userdbd.service...
May 16 00:42:50.556358 systemd[1]: Found device dev-ttyAMA0.device.
May 16 00:42:50.591587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 16 00:42:50.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.593355 systemd[1]: Started systemd-userdbd.service.
May 16 00:42:50.650920 systemd[1]: Finished systemd-udev-settle.service.
May 16 00:42:50.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.653077 systemd[1]: Starting lvm2-activation-early.service...
May 16 00:42:50.665656 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:42:50.670116 systemd-networkd[1102]: lo: Link UP
May 16 00:42:50.670125 systemd-networkd[1102]: lo: Gained carrier
May 16 00:42:50.670446 systemd-networkd[1102]: Enumeration completed
May 16 00:42:50.670550 systemd-networkd[1102]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:42:50.670566 systemd[1]: Started systemd-networkd.service.
May 16 00:42:50.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.675239 systemd-networkd[1102]: eth0: Link UP
May 16 00:42:50.675250 systemd-networkd[1102]: eth0: Gained carrier
May 16 00:42:50.689730 systemd[1]: Finished lvm2-activation-early.service.
May 16 00:42:50.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.690762 systemd[1]: Reached target cryptsetup.target.
May 16 00:42:50.692795 systemd[1]: Starting lvm2-activation.service...
May 16 00:42:50.696079 systemd-networkd[1102]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:42:50.696322 lvm[1130]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:42:50.720934 systemd[1]: Finished lvm2-activation.service.
May 16 00:42:50.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.721971 systemd[1]: Reached target local-fs-pre.target.
May 16 00:42:50.722837 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 00:42:50.722874 systemd[1]: Reached target local-fs.target.
May 16 00:42:50.723740 systemd[1]: Reached target machines.target.
May 16 00:42:50.725840 systemd[1]: Starting ldconfig.service...
May 16 00:42:50.726887 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:42:50.726985 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:50.728339 systemd[1]: Starting systemd-boot-update.service...
May 16 00:42:50.730468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 16 00:42:50.733040 systemd[1]: Starting systemd-machine-id-commit.service...
May 16 00:42:50.735374 systemd[1]: Starting systemd-sysext.service...
May 16 00:42:50.738562 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1133 (bootctl)
May 16 00:42:50.741781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 16 00:42:50.747158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 16 00:42:50.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.751356 systemd[1]: Unmounting usr-share-oem.mount...
May 16 00:42:50.757613 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 16 00:42:50.757867 systemd[1]: Unmounted usr-share-oem.mount.
May 16 00:42:50.819987 kernel: loop0: detected capacity change from 0 to 203944
May 16 00:42:50.826508 systemd[1]: Finished systemd-machine-id-commit.service.
May 16 00:42:50.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.831677 systemd-fsck[1142]: fsck.fat 4.2 (2021-01-31)
May 16 00:42:50.831677 systemd-fsck[1142]: /dev/vda1: 236 files, 117310/258078 clusters
May 16 00:42:50.832044 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:42:50.834238 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 16 00:42:50.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.847989 kernel: loop1: detected capacity change from 0 to 203944
May 16 00:42:50.854208 (sd-sysext)[1151]: Using extensions 'kubernetes'.
May 16 00:42:50.854775 (sd-sysext)[1151]: Merged extensions into '/usr'.
May 16 00:42:50.869935 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:42:50.871213 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:42:50.873022 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:42:50.874792 systemd[1]: Starting modprobe@loop.service...
May 16 00:42:50.875444 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:42:50.875573 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:50.876321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:42:50.876459 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:42:50.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.877630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:42:50.877758 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:42:50.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.879010 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:42:50.879171 systemd[1]: Finished modprobe@loop.service.
May 16 00:42:50.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:50.880222 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:42:50.880318 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:42:50.946539 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:42:50.954160 systemd[1]: Finished ldconfig.service.
May 16 00:42:50.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.063813 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:42:51.065757 systemd[1]: Mounting boot.mount...
May 16 00:42:51.067546 systemd[1]: Mounting usr-share-oem.mount...
May 16 00:42:51.072351 systemd[1]: Mounted usr-share-oem.mount.
May 16 00:42:51.074027 systemd[1]: Finished systemd-sysext.service.
May 16 00:42:51.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.075773 systemd[1]: Starting ensure-sysext.service...
May 16 00:42:51.077379 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 16 00:42:51.081458 systemd[1]: Mounted boot.mount.
May 16 00:42:51.083588 systemd[1]: Reloading.
May 16 00:42:51.086693 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 16 00:42:51.087456 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:42:51.092412 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:42:51.119426 /usr/lib/systemd/system-generators/torcx-generator[1189]: time="2025-05-16T00:42:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 16 00:42:51.119461 /usr/lib/systemd/system-generators/torcx-generator[1189]: time="2025-05-16T00:42:51Z" level=info msg="torcx already run"
May 16 00:42:51.193550 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 16 00:42:51.193845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 16 00:42:51.212189 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:42:51.253484 systemd[1]: Finished systemd-boot-update.service.
May 16 00:42:51.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.255718 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 16 00:42:51.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.258668 systemd[1]: Starting audit-rules.service...
May 16 00:42:51.260470 systemd[1]: Starting clean-ca-certificates.service...
May 16 00:42:51.262279 systemd[1]: Starting systemd-journal-catalog-update.service...
May 16 00:42:51.264543 systemd[1]: Starting systemd-resolved.service...
May 16 00:42:51.266869 systemd[1]: Starting systemd-timesyncd.service...
May 16 00:42:51.268849 systemd[1]: Starting systemd-update-utmp.service...
May 16 00:42:51.270083 systemd[1]: Finished clean-ca-certificates.service.
May 16 00:42:51.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.272738 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:42:51.277000 audit[1245]: SYSTEM_BOOT pid=1245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.280200 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:42:51.281602 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:42:51.283358 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:42:51.285054 systemd[1]: Starting modprobe@loop.service...
May 16 00:42:51.285676 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:42:51.285868 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:51.286046 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:42:51.288394 systemd[1]: Finished systemd-journal-catalog-update.service.
May 16 00:42:51.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.289718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:42:51.289848 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:42:51.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.291073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:42:51.291206 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:42:51.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.292337 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:42:51.292485 systemd[1]: Finished modprobe@loop.service.
May 16 00:42:51.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.293676 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:42:51.293788 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:42:51.294950 systemd[1]: Starting systemd-update-done.service...
May 16 00:42:51.296252 systemd[1]: Finished systemd-update-utmp.service.
May 16 00:42:51.298825 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:42:51.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.300023 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:42:51.301674 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:42:51.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.303428 systemd[1]: Starting modprobe@loop.service...
May 16 00:42:51.304116 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:42:51.304240 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:51.304324 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:42:51.305162 systemd[1]: Finished systemd-update-done.service.
May 16 00:42:51.306228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:42:51.306350 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:42:51.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.307346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:42:51.307465 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:42:51.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.308653 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:42:51.308797 systemd[1]: Finished modprobe@loop.service.
May 16 00:42:51.309790 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:42:51.309874 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:42:51.312491 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:42:51.313585 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:42:51.315409 systemd[1]: Starting modprobe@drm.service...
May 16 00:42:51.317174 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:42:51.318950 systemd[1]: Starting modprobe@loop.service...
May 16 00:42:51.319601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:42:51.319738 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:51.321054 systemd[1]: Starting systemd-networkd-wait-online.service...
May 16 00:42:51.325165 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:42:51.326389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:42:51.326534 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:42:51.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.327542 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:42:51.327678 systemd[1]: Finished modprobe@drm.service.
May 16 00:42:51.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.328717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:42:51.328848 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:42:51.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.329936 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:42:51.330094 systemd[1]: Finished modprobe@loop.service.
May 16 00:42:51.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:42:51.330000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 16 00:42:51.330000 audit[1285]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd0d61f90 a2=420 a3=0 items=0 ppid=1236 pid=1285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:42:51.330000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 16 00:42:51.331137 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:42:51.333267 augenrules[1285]: No rules
May 16 00:42:51.331223 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:42:51.332335 systemd[1]: Finished ensure-sysext.service.
May 16 00:42:51.333235 systemd[1]: Finished audit-rules.service.
May 16 00:42:51.357252 systemd[1]: Started systemd-timesyncd.service.
May 16 00:42:51.357886 systemd-timesyncd[1242]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:42:51.357948 systemd-timesyncd[1242]: Initial clock synchronization to Fri 2025-05-16 00:42:51.201471 UTC.
May 16 00:42:51.358269 systemd[1]: Reached target time-set.target.
May 16 00:42:51.359513 systemd-resolved[1241]: Positive Trust Anchors:
May 16 00:42:51.359524 systemd-resolved[1241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:42:51.359550 systemd-resolved[1241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:42:51.371030 systemd-resolved[1241]: Defaulting to hostname 'linux'.
May 16 00:42:51.372415 systemd[1]: Started systemd-resolved.service.
May 16 00:42:51.373068 systemd[1]: Reached target network.target.
May 16 00:42:51.373607 systemd[1]: Reached target nss-lookup.target.
May 16 00:42:51.374186 systemd[1]: Reached target sysinit.target.
May 16 00:42:51.374776 systemd[1]: Started motdgen.path.
May 16 00:42:51.375449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 16 00:42:51.376502 systemd[1]: Started logrotate.timer.
May 16 00:42:51.377133 systemd[1]: Started mdadm.timer.
May 16 00:42:51.377625 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 16 00:42:51.378302 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 00:42:51.378337 systemd[1]: Reached target paths.target.
May 16 00:42:51.378943 systemd[1]: Reached target timers.target.
May 16 00:42:51.379784 systemd[1]: Listening on dbus.socket.
May 16 00:42:51.381566 systemd[1]: Starting docker.socket...
May 16 00:42:51.383153 systemd[1]: Listening on sshd.socket.
May 16 00:42:51.383936 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:51.384261 systemd[1]: Listening on docker.socket.
May 16 00:42:51.384914 systemd[1]: Reached target sockets.target.
May 16 00:42:51.385606 systemd[1]: Reached target basic.target.
May 16 00:42:51.386333 systemd[1]: System is tainted: cgroupsv1
May 16 00:42:51.386381 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 16 00:42:51.386402 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 16 00:42:51.387468 systemd[1]: Starting containerd.service...
May 16 00:42:51.389109 systemd[1]: Starting dbus.service...
May 16 00:42:51.391019 systemd[1]: Starting enable-oem-cloudinit.service...
May 16 00:42:51.392847 systemd[1]: Starting extend-filesystems.service...
May 16 00:42:51.393749 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 16 00:42:51.395220 systemd[1]: Starting motdgen.service...
May 16 00:42:51.397345 systemd[1]: Starting prepare-helm.service...
May 16 00:42:51.399595 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 16 00:42:51.401441 systemd[1]: Starting sshd-keygen.service...
May 16 00:42:51.404014 systemd[1]: Starting systemd-logind.service...
May 16 00:42:51.407055 jq[1299]: false
May 16 00:42:51.404572 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:42:51.404646 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 00:42:51.406065 systemd[1]: Starting update-engine.service...
May 16 00:42:51.409469 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 16 00:42:51.414211 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 00:42:51.417716 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 16 00:42:51.419225 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 00:42:51.419443 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 16 00:42:51.423452 extend-filesystems[1300]: Found loop1
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda1
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda2
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda3
May 16 00:42:51.423452 extend-filesystems[1300]: Found usr
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda4
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda6
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda7
May 16 00:42:51.423452 extend-filesystems[1300]: Found vda9
May 16 00:42:51.423452 extend-filesystems[1300]: Checking size of /dev/vda9
May 16 00:42:51.443623 jq[1314]: true
May 16 00:42:51.443716 tar[1320]: linux-arm64/helm
May 16 00:42:51.444042 jq[1330]: true
May 16 00:42:51.448085 dbus-daemon[1298]: [system] SELinux support is enabled
May 16 00:42:51.448340 systemd[1]: Started dbus.service.
May 16 00:42:51.450727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 00:42:51.450757 systemd[1]: Reached target system-config.target.
May 16 00:42:51.451549 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 00:42:51.451564 systemd[1]: Reached target user-config.target.
May 16 00:42:51.452474 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:42:51.452711 systemd[1]: Finished motdgen.service. May 16 00:42:51.469594 extend-filesystems[1300]: Resized partition /dev/vda9 May 16 00:42:51.479680 extend-filesystems[1355]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:42:51.481350 systemd-logind[1310]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:42:51.483791 systemd-logind[1310]: New seat seat0. May 16 00:42:51.485254 systemd[1]: Started systemd-logind.service. May 16 00:42:51.503977 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:42:51.510706 update_engine[1312]: I0516 00:42:51.510506 1312 main.cc:92] Flatcar Update Engine starting May 16 00:42:51.529493 update_engine[1312]: I0516 00:42:51.521108 1312 update_check_scheduler.cc:74] Next update check in 9m25s May 16 00:42:51.513110 systemd[1]: Started update-engine.service. May 16 00:42:51.515540 systemd[1]: Started locksmithd.service. May 16 00:42:51.538706 bash[1354]: Updated "/home/core/.ssh/authorized_keys" May 16 00:42:51.539167 env[1326]: time="2025-05-16T00:42:51.539120200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:42:51.559268 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:42:51.540449 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:42:51.562055 extend-filesystems[1355]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:42:51.562055 extend-filesystems[1355]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:42:51.562055 extend-filesystems[1355]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:42:51.565030 extend-filesystems[1300]: Resized filesystem in /dev/vda9 May 16 00:42:51.562741 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:42:51.565764 env[1326]: time="2025-05-16T00:42:51.565585840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 16 00:42:51.565764 env[1326]: time="2025-05-16T00:42:51.565733920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:42:51.563005 systemd[1]: Finished extend-filesystems.service. May 16 00:42:51.566831 env[1326]: time="2025-05-16T00:42:51.566795520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:42:51.566831 env[1326]: time="2025-05-16T00:42:51.566827280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:42:51.567230 env[1326]: time="2025-05-16T00:42:51.567203720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:42:51.567270 env[1326]: time="2025-05-16T00:42:51.567229360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:42:51.567270 env[1326]: time="2025-05-16T00:42:51.567244400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:42:51.567270 env[1326]: time="2025-05-16T00:42:51.567253960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:42:51.567351 env[1326]: time="2025-05-16T00:42:51.567334120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 16 00:42:51.567631 env[1326]: time="2025-05-16T00:42:51.567610040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:42:51.567779 env[1326]: time="2025-05-16T00:42:51.567757320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:42:51.567779 env[1326]: time="2025-05-16T00:42:51.567777640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:42:51.567846 env[1326]: time="2025-05-16T00:42:51.567828560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:42:51.567883 env[1326]: time="2025-05-16T00:42:51.567845400Z" level=info msg="metadata content store policy set" policy=shared May 16 00:42:51.571043 env[1326]: time="2025-05-16T00:42:51.571017440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:42:51.571075 env[1326]: time="2025-05-16T00:42:51.571051200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:42:51.571075 env[1326]: time="2025-05-16T00:42:51.571071880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:42:51.571133 env[1326]: time="2025-05-16T00:42:51.571116680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571160 env[1326]: time="2025-05-16T00:42:51.571132720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 16 00:42:51.571160 env[1326]: time="2025-05-16T00:42:51.571146240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571197 env[1326]: time="2025-05-16T00:42:51.571158440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571701 env[1326]: time="2025-05-16T00:42:51.571678520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571728 env[1326]: time="2025-05-16T00:42:51.571707600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571728 env[1326]: time="2025-05-16T00:42:51.571721400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571770 env[1326]: time="2025-05-16T00:42:51.571736040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:42:51.571770 env[1326]: time="2025-05-16T00:42:51.571751880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:42:51.571908 env[1326]: time="2025-05-16T00:42:51.571880360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:42:51.572125 env[1326]: time="2025-05-16T00:42:51.572106640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:42:51.572425 env[1326]: time="2025-05-16T00:42:51.572406080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:42:51.572460 env[1326]: time="2025-05-16T00:42:51.572437640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 16 00:42:51.572460 env[1326]: time="2025-05-16T00:42:51.572451600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:42:51.572619 env[1326]: time="2025-05-16T00:42:51.572603320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572647 env[1326]: time="2025-05-16T00:42:51.572620320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572647 env[1326]: time="2025-05-16T00:42:51.572633200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572647 env[1326]: time="2025-05-16T00:42:51.572644320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572709 env[1326]: time="2025-05-16T00:42:51.572658000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572709 env[1326]: time="2025-05-16T00:42:51.572670160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572709 env[1326]: time="2025-05-16T00:42:51.572681840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572709 env[1326]: time="2025-05-16T00:42:51.572695400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572787 env[1326]: time="2025-05-16T00:42:51.572708080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:42:51.572846 env[1326]: time="2025-05-16T00:42:51.572825280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 16 00:42:51.572846 env[1326]: time="2025-05-16T00:42:51.572841160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572888 env[1326]: time="2025-05-16T00:42:51.572861280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:42:51.572888 env[1326]: time="2025-05-16T00:42:51.572873400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:42:51.572941 env[1326]: time="2025-05-16T00:42:51.572887480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:42:51.572941 env[1326]: time="2025-05-16T00:42:51.572908240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:42:51.572941 env[1326]: time="2025-05-16T00:42:51.572925920Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:42:51.573055 env[1326]: time="2025-05-16T00:42:51.573033000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:42:51.573418 env[1326]: time="2025-05-16T00:42:51.573367480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:42:51.574173 env[1326]: time="2025-05-16T00:42:51.573425440Z" level=info msg="Connect containerd service" May 16 00:42:51.574173 env[1326]: time="2025-05-16T00:42:51.573457200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:42:51.574328 env[1326]: time="2025-05-16T00:42:51.574297280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:42:51.574590 env[1326]: time="2025-05-16T00:42:51.574548200Z" level=info msg="Start subscribing containerd event" May 16 00:42:51.574620 env[1326]: time="2025-05-16T00:42:51.574606640Z" level=info msg="Start recovering state" May 16 00:42:51.574806 env[1326]: time="2025-05-16T00:42:51.574782120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:42:51.574885 env[1326]: time="2025-05-16T00:42:51.574864640Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:42:51.575023 systemd[1]: Started containerd.service. 
May 16 00:42:51.575825 env[1326]: time="2025-05-16T00:42:51.575800000Z" level=info msg="containerd successfully booted in 0.039999s" May 16 00:42:51.575825 env[1326]: time="2025-05-16T00:42:51.575789040Z" level=info msg="Start event monitor" May 16 00:42:51.575873 env[1326]: time="2025-05-16T00:42:51.575844000Z" level=info msg="Start snapshots syncer" May 16 00:42:51.575873 env[1326]: time="2025-05-16T00:42:51.575861080Z" level=info msg="Start cni network conf syncer for default" May 16 00:42:51.575920 env[1326]: time="2025-05-16T00:42:51.575871400Z" level=info msg="Start streaming server" May 16 00:42:51.619475 locksmithd[1357]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:42:51.852009 tar[1320]: linux-arm64/LICENSE May 16 00:42:51.852090 tar[1320]: linux-arm64/README.md May 16 00:42:51.856278 systemd[1]: Finished prepare-helm.service. May 16 00:42:52.588141 systemd-networkd[1102]: eth0: Gained IPv6LL May 16 00:42:52.589830 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:42:52.590892 systemd[1]: Reached target network-online.target. May 16 00:42:52.593263 systemd[1]: Starting kubelet.service... May 16 00:42:53.060583 sshd_keygen[1317]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:42:53.078385 systemd[1]: Finished sshd-keygen.service. May 16 00:42:53.080705 systemd[1]: Starting issuegen.service... May 16 00:42:53.085718 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:42:53.085935 systemd[1]: Finished issuegen.service. May 16 00:42:53.088200 systemd[1]: Starting systemd-user-sessions.service... May 16 00:42:53.096503 systemd[1]: Finished systemd-user-sessions.service. May 16 00:42:53.098761 systemd[1]: Started getty@tty1.service. May 16 00:42:53.101109 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:42:53.101999 systemd[1]: Reached target getty.target. May 16 00:42:53.257553 systemd[1]: Started kubelet.service. 
May 16 00:42:53.258753 systemd[1]: Reached target multi-user.target. May 16 00:42:53.260785 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:42:53.267325 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:42:53.267572 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 16 00:42:53.268547 systemd[1]: Startup finished in 7.125s (kernel) + 5.170s (userspace) = 12.295s. May 16 00:42:53.727568 kubelet[1399]: E0516 00:42:53.727516 1399 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:42:53.729459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:42:53.729605 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:42:54.215141 systemd[1]: Created slice system-sshd.slice. May 16 00:42:54.216322 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:49970.service. May 16 00:42:54.263807 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 49970 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:54.265733 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.274703 systemd-logind[1310]: New session 1 of user core. May 16 00:42:54.275535 systemd[1]: Created slice user-500.slice. May 16 00:42:54.276532 systemd[1]: Starting user-runtime-dir@500.service... May 16 00:42:54.285343 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:42:54.286639 systemd[1]: Starting user@500.service... May 16 00:42:54.293321 (systemd)[1414]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.356024 systemd[1414]: Queued start job for default target default.target. 
May 16 00:42:54.356275 systemd[1414]: Reached target paths.target. May 16 00:42:54.356291 systemd[1414]: Reached target sockets.target. May 16 00:42:54.356302 systemd[1414]: Reached target timers.target. May 16 00:42:54.356311 systemd[1414]: Reached target basic.target. May 16 00:42:54.356353 systemd[1414]: Reached target default.target. May 16 00:42:54.356374 systemd[1414]: Startup finished in 57ms. May 16 00:42:54.356669 systemd[1]: Started user@500.service. May 16 00:42:54.357661 systemd[1]: Started session-1.scope. May 16 00:42:54.408663 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:49982.service. May 16 00:42:54.460707 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 49982 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:54.462337 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.466034 systemd-logind[1310]: New session 2 of user core. May 16 00:42:54.466545 systemd[1]: Started session-2.scope. May 16 00:42:54.519302 sshd[1423]: pam_unix(sshd:session): session closed for user core May 16 00:42:54.521437 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:49990.service. May 16 00:42:54.522486 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:49982.service: Deactivated successfully. May 16 00:42:54.523666 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:42:54.524093 systemd-logind[1310]: Session 2 logged out. Waiting for processes to exit. May 16 00:42:54.525027 systemd-logind[1310]: Removed session 2. May 16 00:42:54.566882 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 49990 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:54.568154 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.572248 systemd[1]: Started session-3.scope. May 16 00:42:54.572756 systemd-logind[1310]: New session 3 of user core. 
May 16 00:42:54.622082 sshd[1428]: pam_unix(sshd:session): session closed for user core May 16 00:42:54.624416 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:49994.service. May 16 00:42:54.625189 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:49990.service: Deactivated successfully. May 16 00:42:54.626051 systemd-logind[1310]: Session 3 logged out. Waiting for processes to exit. May 16 00:42:54.626235 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:42:54.626858 systemd-logind[1310]: Removed session 3. May 16 00:42:54.664591 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 49994 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:54.665913 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.669133 systemd-logind[1310]: New session 4 of user core. May 16 00:42:54.670067 systemd[1]: Started session-4.scope. May 16 00:42:54.729437 sshd[1435]: pam_unix(sshd:session): session closed for user core May 16 00:42:54.731839 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:50010.service. May 16 00:42:54.732830 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:49994.service: Deactivated successfully. May 16 00:42:54.733859 systemd-logind[1310]: Session 4 logged out. Waiting for processes to exit. May 16 00:42:54.734070 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:42:54.734979 systemd-logind[1310]: Removed session 4. May 16 00:42:54.772338 sshd[1442]: Accepted publickey for core from 10.0.0.1 port 50010 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:54.773544 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.777037 systemd-logind[1310]: New session 5 of user core. May 16 00:42:54.779148 systemd[1]: Started session-5.scope. 
May 16 00:42:54.855398 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:42:54.855629 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:42:54.871134 dbus-daemon[1298]: avc: received setenforce notice (enforcing=1) May 16 00:42:54.872155 sudo[1448]: pam_unix(sudo:session): session closed for user root May 16 00:42:54.874227 sshd[1442]: pam_unix(sshd:session): session closed for user core May 16 00:42:54.876670 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:50014.service. May 16 00:42:54.877758 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:50010.service: Deactivated successfully. May 16 00:42:54.879017 systemd-logind[1310]: Session 5 logged out. Waiting for processes to exit. May 16 00:42:54.879268 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:42:54.880271 systemd-logind[1310]: Removed session 5. May 16 00:42:54.917573 sshd[1450]: Accepted publickey for core from 10.0.0.1 port 50014 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:54.919207 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:54.922596 systemd-logind[1310]: New session 6 of user core. May 16 00:42:54.924507 systemd[1]: Started session-6.scope. May 16 00:42:54.979290 sudo[1457]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:42:54.979521 sudo[1457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:42:54.982282 sudo[1457]: pam_unix(sudo:session): session closed for user root May 16 00:42:54.986891 sudo[1456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 16 00:42:54.987142 sudo[1456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:42:54.996127 systemd[1]: Stopping audit-rules.service... 
May 16 00:42:54.996000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 16 00:42:54.997503 auditctl[1460]: No rules May 16 00:42:54.998011 kernel: kauditd_printk_skb: 125 callbacks suppressed May 16 00:42:54.998062 kernel: audit: type=1305 audit(1747356174.996:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 16 00:42:54.998310 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:42:54.998540 systemd[1]: Stopped audit-rules.service. May 16 00:42:54.996000 audit[1460]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff79c61b0 a2=420 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.000123 systemd[1]: Starting audit-rules.service... May 16 00:42:55.002522 kernel: audit: type=1300 audit(1747356174.996:157): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff79c61b0 a2=420 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.002595 kernel: audit: type=1327 audit(1747356174.996:157): proctitle=2F7362696E2F617564697463746C002D44 May 16 00:42:54.996000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 16 00:42:55.003914 kernel: audit: type=1131 audit(1747356174.997:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:54.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:55.017272 augenrules[1478]: No rules May 16 00:42:55.018323 systemd[1]: Finished audit-rules.service. May 16 00:42:55.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:55.019776 sudo[1456]: pam_unix(sudo:session): session closed for user root May 16 00:42:55.018000 audit[1456]: USER_END pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:42:55.021671 sshd[1450]: pam_unix(sshd:session): session closed for user core May 16 00:42:55.023653 kernel: audit: type=1130 audit(1747356175.017:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:55.023702 kernel: audit: type=1106 audit(1747356175.018:160): pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:42:55.023731 kernel: audit: type=1104 audit(1747356175.018:161): pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 16 00:42:55.018000 audit[1456]: CRED_DISP pid=1456 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:42:55.023673 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:50030.service. May 16 00:42:55.024932 systemd-logind[1310]: Session 6 logged out. Waiting for processes to exit. May 16 00:42:55.025150 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:50014.service: Deactivated successfully. May 16 00:42:55.022000 audit[1450]: USER_END pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.025819 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:42:55.026197 systemd-logind[1310]: Removed session 6. 
May 16 00:42:55.028656 kernel: audit: type=1106 audit(1747356175.022:162): pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.028747 kernel: audit: type=1104 audit(1747356175.022:163): pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.022000 audit[1450]: CRED_DISP pid=1450 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.030912 kernel: audit: type=1130 audit(1747356175.022:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:55.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:55.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.81:22-10.0.0.1:50014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:42:55.065000 audit[1483]: USER_ACCT pid=1483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.066220 sshd[1483]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:42:55.065000 audit[1483]: CRED_ACQ pid=1483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.066000 audit[1483]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc1afe0c0 a2=3 a3=1 items=0 ppid=1 pid=1483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.066000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:42:55.067273 sshd[1483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:42:55.070591 systemd-logind[1310]: New session 7 of user core. May 16 00:42:55.071379 systemd[1]: Started session-7.scope. 
May 16 00:42:55.073000 audit[1483]: USER_START pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.074000 audit[1488]: CRED_ACQ pid=1488 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:42:55.121000 audit[1489]: USER_ACCT pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:42:55.122174 sudo[1489]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:42:55.121000 audit[1489]: CRED_REFR pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:42:55.122423 sudo[1489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:42:55.123000 audit[1489]: USER_START pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:42:55.183680 systemd[1]: Starting docker.service... 
May 16 00:42:55.273169 env[1501]: time="2025-05-16T00:42:55.272065874Z" level=info msg="Starting up" May 16 00:42:55.274155 env[1501]: time="2025-05-16T00:42:55.274130050Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:42:55.274155 env[1501]: time="2025-05-16T00:42:55.274151565Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:42:55.274220 env[1501]: time="2025-05-16T00:42:55.274170987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:42:55.274220 env[1501]: time="2025-05-16T00:42:55.274181014Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:42:55.276378 env[1501]: time="2025-05-16T00:42:55.276345420Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:42:55.276378 env[1501]: time="2025-05-16T00:42:55.276369304Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:42:55.276458 env[1501]: time="2025-05-16T00:42:55.276383673Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:42:55.276458 env[1501]: time="2025-05-16T00:42:55.276395358Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:42:55.471037 env[1501]: time="2025-05-16T00:42:55.470985264Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 16 00:42:55.471037 env[1501]: time="2025-05-16T00:42:55.471013687Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 16 00:42:55.471362 env[1501]: time="2025-05-16T00:42:55.471170566Z" level=info msg="Loading containers: start." 
May 16 00:42:55.539000 audit[1535]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.539000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff7792490 a2=0 a3=1 items=0 ppid=1501 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.539000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 16 00:42:55.541000 audit[1537]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.541000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc1a44760 a2=0 a3=1 items=0 ppid=1501 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.541000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 16 00:42:55.543000 audit[1539]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.543000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff4445470 a2=0 a3=1 items=0 ppid=1501 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.543000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 16 00:42:55.546000 
audit[1541]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.546000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe4abbe40 a2=0 a3=1 items=0 ppid=1501 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.546000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 16 00:42:55.549000 audit[1543]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.549000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffec5a4470 a2=0 a3=1 items=0 ppid=1501 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.549000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 16 00:42:55.582000 audit[1548]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.582000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc73cead0 a2=0 a3=1 items=0 ppid=1501 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.582000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 16 00:42:55.588000 audit[1550]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.588000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe43193f0 a2=0 a3=1 items=0 ppid=1501 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.588000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 16 00:42:55.591000 audit[1552]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.591000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffceb40590 a2=0 a3=1 items=0 ppid=1501 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.591000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 16 00:42:55.593000 audit[1554]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.593000 audit[1554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff740fa80 a2=0 a3=1 items=0 ppid=1501 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.593000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 16 00:42:55.600000 audit[1558]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.600000 audit[1558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe188eb60 a2=0 a3=1 items=0 ppid=1501 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.600000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 16 00:42:55.614000 audit[1559]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.614000 audit[1559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffd1909e0 a2=0 a3=1 items=0 ppid=1501 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.614000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 16 00:42:55.624985 kernel: Initializing XFRM netlink socket May 16 00:42:55.650376 env[1501]: time="2025-05-16T00:42:55.650326544Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 16 00:42:55.664000 audit[1567]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.664000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff3a3a690 a2=0 a3=1 items=0 ppid=1501 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.664000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 16 00:42:55.684000 audit[1570]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.684000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd3d94c30 a2=0 a3=1 items=0 ppid=1501 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.684000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 16 00:42:55.687000 audit[1573]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.687000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffb444f80 a2=0 a3=1 items=0 ppid=1501 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 
16 00:42:55.687000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 16 00:42:55.688000 audit[1575]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.688000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe9d17650 a2=0 a3=1 items=0 ppid=1501 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.688000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 16 00:42:55.690000 audit[1577]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.690000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffdbc13720 a2=0 a3=1 items=0 ppid=1501 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.690000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 16 00:42:55.692000 audit[1579]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.692000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff78ebb70 a2=0 a3=1 items=0 ppid=1501 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.692000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 16 00:42:55.693000 audit[1581]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.693000 audit[1581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc18e7f10 a2=0 a3=1 items=0 ppid=1501 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.693000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 16 00:42:55.700000 audit[1584]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.700000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffea5a1eb0 a2=0 a3=1 items=0 ppid=1501 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.700000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 16 00:42:55.702000 audit[1586]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.702000 
audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffdf61a620 a2=0 a3=1 items=0 ppid=1501 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.702000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 16 00:42:55.704000 audit[1588]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.704000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff5cc75c0 a2=0 a3=1 items=0 ppid=1501 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.704000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 16 00:42:55.705000 audit[1590]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.705000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe490fb70 a2=0 a3=1 items=0 ppid=1501 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.705000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 16 00:42:55.707317 systemd-networkd[1102]: docker0: Link UP May 16 00:42:55.714000 audit[1594]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.714000 audit[1594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe4325090 a2=0 a3=1 items=0 ppid=1501 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.714000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 16 00:42:55.730000 audit[1595]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:42:55.730000 audit[1595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce667540 a2=0 a3=1 items=0 ppid=1501 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:42:55.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 16 00:42:55.731382 env[1501]: time="2025-05-16T00:42:55.731333006Z" level=info msg="Loading containers: done." May 16 00:42:55.751534 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3567531038-merged.mount: Deactivated successfully. 
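Each `NETFILTER_CFG` record in this run is paired with a `PROCTITLE` that, once decoded, shows the exact `iptables` command the Docker daemon issued while building its `DOCKER`, `DOCKER-USER`, and `DOCKER-ISOLATION-STAGE-*` chains. For example, the first one (pid 1535, table=nat) decodes as sketched below; `audit_argv` is an illustrative helper, not an auditd utility:

```python
def audit_argv(hex_str: str) -> list:
    # Reconstruct the argv vector from an audit PROCTITLE hex dump
    # (arguments are NUL-separated in the encoded bytes).
    return bytes.fromhex(hex_str).decode("utf-8").split("\x00")

hexdump = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
           "002D74006E6174002D4E00444F434B4552")
print(audit_argv(hexdump))
# ['/usr/sbin/iptables', '--wait', '-t', 'nat', '-N', 'DOCKER']
```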
May 16 00:42:55.759009 env[1501]: time="2025-05-16T00:42:55.758931791Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:42:55.759156 env[1501]: time="2025-05-16T00:42:55.759138134Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 16 00:42:55.759256 env[1501]: time="2025-05-16T00:42:55.759227666Z" level=info msg="Daemon has completed initialization" May 16 00:42:55.778372 systemd[1]: Started docker.service. May 16 00:42:55.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:42:55.782641 env[1501]: time="2025-05-16T00:42:55.782533720Z" level=info msg="API listen on /run/docker.sock" May 16 00:42:56.392857 env[1326]: time="2025-05-16T00:42:56.392624041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 16 00:42:56.995545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899846966.mount: Deactivated successfully. 
May 16 00:42:58.131326 env[1326]: time="2025-05-16T00:42:58.131269560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:58.133254 env[1326]: time="2025-05-16T00:42:58.133224025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:58.135008 env[1326]: time="2025-05-16T00:42:58.134980602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:58.136582 env[1326]: time="2025-05-16T00:42:58.136551422Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:58.138152 env[1326]: time="2025-05-16T00:42:58.138113877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 16 00:42:58.139454 env[1326]: time="2025-05-16T00:42:58.139421268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 16 00:42:59.481103 env[1326]: time="2025-05-16T00:42:59.481020689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:59.483201 env[1326]: time="2025-05-16T00:42:59.483162246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 16 00:42:59.485752 env[1326]: time="2025-05-16T00:42:59.485726018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:59.487845 env[1326]: time="2025-05-16T00:42:59.487819784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:42:59.488764 env[1326]: time="2025-05-16T00:42:59.488722324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 16 00:42:59.490172 env[1326]: time="2025-05-16T00:42:59.490120431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 16 00:43:00.862385 env[1326]: time="2025-05-16T00:43:00.862337237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:00.863819 env[1326]: time="2025-05-16T00:43:00.863789300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:00.865819 env[1326]: time="2025-05-16T00:43:00.865781551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:00.868403 env[1326]: time="2025-05-16T00:43:00.868367031Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:00.869066 env[1326]: time="2025-05-16T00:43:00.869036902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 16 00:43:00.869669 env[1326]: time="2025-05-16T00:43:00.869641455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 00:43:01.931149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090702911.mount: Deactivated successfully. May 16 00:43:02.505621 env[1326]: time="2025-05-16T00:43:02.505566619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:02.507555 env[1326]: time="2025-05-16T00:43:02.507508941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:02.509016 env[1326]: time="2025-05-16T00:43:02.508987023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:02.510584 env[1326]: time="2025-05-16T00:43:02.510556432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:02.510872 env[1326]: time="2025-05-16T00:43:02.510847050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference 
\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 16 00:43:02.511344 env[1326]: time="2025-05-16T00:43:02.511321399Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:43:03.153148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714943789.mount: Deactivated successfully. May 16 00:43:03.980428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:43:03.980613 systemd[1]: Stopped kubelet.service. May 16 00:43:03.985384 kernel: kauditd_printk_skb: 84 callbacks suppressed May 16 00:43:03.985431 kernel: audit: type=1130 audit(1747356183.979:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:03.985455 kernel: audit: type=1131 audit(1747356183.979:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:03.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:03.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:03.982086 systemd[1]: Starting kubelet.service... May 16 00:43:04.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:04.087516 systemd[1]: Started kubelet.service. 
May 16 00:43:04.090996 kernel: audit: type=1130 audit(1747356184.086:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:04.097911 env[1326]: time="2025-05-16T00:43:04.097864379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.100411 env[1326]: time="2025-05-16T00:43:04.100377089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.103448 env[1326]: time="2025-05-16T00:43:04.103411736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.106638 env[1326]: time="2025-05-16T00:43:04.106594319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.107526 env[1326]: time="2025-05-16T00:43:04.107498227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 00:43:04.108174 env[1326]: time="2025-05-16T00:43:04.108144034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:43:04.132443 kubelet[1641]: E0516 00:43:04.132396 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file 
\"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:43:04.134591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:43:04.134737 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:43:04.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 16 00:43:04.137982 kernel: audit: type=1131 audit(1747356184.134:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 16 00:43:04.575948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592558120.mount: Deactivated successfully. May 16 00:43:04.579325 env[1326]: time="2025-05-16T00:43:04.579276896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.580869 env[1326]: time="2025-05-16T00:43:04.580832229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.582283 env[1326]: time="2025-05-16T00:43:04.582255923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.583601 env[1326]: time="2025-05-16T00:43:04.583565548Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:04.584154 env[1326]: 
time="2025-05-16T00:43:04.584121192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 00:43:04.584670 env[1326]: time="2025-05-16T00:43:04.584644045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 16 00:43:05.076643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174958644.mount: Deactivated successfully. May 16 00:43:07.299980 env[1326]: time="2025-05-16T00:43:07.299916180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:07.302368 env[1326]: time="2025-05-16T00:43:07.302328841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:07.304484 env[1326]: time="2025-05-16T00:43:07.304445325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:07.307349 env[1326]: time="2025-05-16T00:43:07.307313182Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:07.308231 env[1326]: time="2025-05-16T00:43:07.308191460Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 16 00:43:13.028593 systemd[1]: Stopped kubelet.service. 
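The kernel `audit(1747356184.134:202)` records above embed a Unix epoch timestamp before the colon, which lines up with the journal's wall-clock times (here, the kubelet failure at May 16 00:43:04 UTC). A quick conversion sketch:

```python
from datetime import datetime, timezone

# The number before ':' in audit(1747356184.134:202) is seconds since the epoch
ts = datetime.fromtimestamp(1747356184, tz=timezone.utc)
print(ts.isoformat())  # 2025-05-16T00:43:04+00:00
```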
May 16 00:43:13.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.031011 kernel: audit: type=1130 audit(1747356193.027:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.032437 systemd[1]: Starting kubelet.service... May 16 00:43:13.033985 kernel: audit: type=1131 audit(1747356193.030:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.056531 systemd[1]: Reloading. May 16 00:43:13.114512 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2025-05-16T00:43:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:43:13.114868 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2025-05-16T00:43:13Z" level=info msg="torcx already run" May 16 00:43:13.213440 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:43:13.213460 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 16 00:43:13.230216 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:43:13.287130 systemd[1]: Started kubelet.service. May 16 00:43:13.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.289995 kernel: audit: type=1130 audit(1747356193.286:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.290631 systemd[1]: Stopping kubelet.service... May 16 00:43:13.291200 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:43:13.291433 systemd[1]: Stopped kubelet.service. May 16 00:43:13.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.292914 systemd[1]: Starting kubelet.service... May 16 00:43:13.293983 kernel: audit: type=1131 audit(1747356193.290:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.381370 systemd[1]: Started kubelet.service. May 16 00:43:13.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:13.384976 kernel: audit: type=1130 audit(1747356193.381:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:13.424076 kubelet[1762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:43:13.424076 kubelet[1762]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:43:13.424076 kubelet[1762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:43:13.424472 kubelet[1762]: I0516 00:43:13.424122 1762 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:43:14.296614 kubelet[1762]: I0516 00:43:14.296563 1762 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:43:14.296614 kubelet[1762]: I0516 00:43:14.296603 1762 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:43:14.296868 kubelet[1762]: I0516 00:43:14.296838 1762 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:43:14.327217 kubelet[1762]: E0516 00:43:14.327165 1762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:14.329387 kubelet[1762]: I0516 00:43:14.329357 1762 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:43:14.336310 kubelet[1762]: E0516 00:43:14.336279 1762 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:43:14.336452 kubelet[1762]: I0516 00:43:14.336438 1762 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:43:14.340333 kubelet[1762]: I0516 00:43:14.340289 1762 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:43:14.341488 kubelet[1762]: I0516 00:43:14.341452 1762 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:43:14.341639 kubelet[1762]: I0516 00:43:14.341603 1762 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:43:14.341804 kubelet[1762]: I0516 00:43:14.341633 1762 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} May 16 00:43:14.341909 kubelet[1762]: I0516 00:43:14.341869 1762 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:43:14.341909 kubelet[1762]: I0516 00:43:14.341878 1762 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:43:14.342136 kubelet[1762]: I0516 00:43:14.342115 1762 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:14.350195 kubelet[1762]: I0516 00:43:14.350170 1762 kubelet.go:408] "Attempting to sync node with API server" May 16 00:43:14.350270 kubelet[1762]: I0516 00:43:14.350203 1762 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:43:14.350270 kubelet[1762]: I0516 00:43:14.350228 1762 kubelet.go:314] "Adding apiserver pod source" May 16 00:43:14.350392 kubelet[1762]: I0516 00:43:14.350375 1762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:43:14.363011 kubelet[1762]: W0516 00:43:14.362933 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:14.363108 kubelet[1762]: W0516 00:43:14.363055 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:14.363154 kubelet[1762]: E0516 00:43:14.363118 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:14.363218 
kubelet[1762]: E0516 00:43:14.363196 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:14.364653 kubelet[1762]: I0516 00:43:14.364628 1762 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:43:14.365329 kubelet[1762]: I0516 00:43:14.365314 1762 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:43:14.365489 kubelet[1762]: W0516 00:43:14.365479 1762 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:43:14.366398 kubelet[1762]: I0516 00:43:14.366384 1762 server.go:1274] "Started kubelet" May 16 00:43:14.366643 kubelet[1762]: I0516 00:43:14.366588 1762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:43:14.366753 kubelet[1762]: I0516 00:43:14.366724 1762 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:43:14.366954 kubelet[1762]: I0516 00:43:14.366933 1762 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:43:14.368208 kubelet[1762]: I0516 00:43:14.368012 1762 server.go:449] "Adding debug handlers to kubelet server" May 16 00:43:14.376000 audit[1762]: AVC avc: denied { mac_admin } for pid=1762 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:14.378057 kubelet[1762]: I0516 00:43:14.377940 1762 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin 
registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 16 00:43:14.378057 kubelet[1762]: I0516 00:43:14.377998 1762 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 16 00:43:14.378117 kubelet[1762]: I0516 00:43:14.378062 1762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:43:14.376000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:14.381064 kernel: audit: type=1400 audit(1747356194.376:208): avc: denied { mac_admin } for pid=1762 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:14.381131 kernel: audit: type=1401 audit(1747356194.376:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:14.381161 kernel: audit: type=1300 audit(1747356194.376:208): arch=c00000b7 syscall=5 success=no exit=-22 a0=400042bd70 a1=4000a70ae0 a2=400042bd40 a3=25 items=0 ppid=1 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.376000 audit[1762]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400042bd70 a1=4000a70ae0 a2=400042bd40 a3=25 items=0 ppid=1 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.383240 kubelet[1762]: I0516 00:43:14.383213 1762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:43:14.376000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:14.385038 kubelet[1762]: I0516 00:43:14.385016 1762 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:43:14.385437 kubelet[1762]: E0516 00:43:14.385418 1762 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:14.385548 kubelet[1762]: I0516 00:43:14.385536 1762 reconciler.go:26] "Reconciler: start to sync state" May 16 00:43:14.386640 kernel: audit: type=1327 audit(1747356194.376:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:14.386683 kernel: audit: type=1400 audit(1747356194.377:209): avc: denied { mac_admin } for pid=1762 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:14.377000 audit[1762]: AVC avc: denied { mac_admin } for pid=1762 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:14.387668 kubelet[1762]: I0516 00:43:14.387474 1762 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:43:14.377000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:14.377000 audit[1762]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000433ba0 a1=4000a70af8 a2=400042be00 a3=25 items=0 ppid=1 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.377000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:14.384000 audit[1776]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.384000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd33a8c80 a2=0 a3=1 items=0 ppid=1762 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 16 00:43:14.386000 audit[1777]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.386000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea9c4f30 a2=0 a3=1 items=0 ppid=1762 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.386000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 16 00:43:14.389491 kubelet[1762]: W0516 00:43:14.389400 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:14.389491 kubelet[1762]: E0516 00:43:14.389458 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:14.389587 kubelet[1762]: E0516 00:43:14.389519 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" May 16 00:43:14.389787 kubelet[1762]: E0516 00:43:14.388672 1762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fdb336d4922c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:43:14.366366401 +0000 UTC m=+0.979163955,LastTimestamp:2025-05-16 00:43:14.366366401 +0000 UTC m=+0.979163955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:43:14.389898 kubelet[1762]: I0516 00:43:14.389881 1762 factory.go:221] Registration of the systemd container factory successfully May 16 00:43:14.390028 kubelet[1762]: I0516 00:43:14.390006 1762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: 
no such file or directory May 16 00:43:14.390693 kubelet[1762]: E0516 00:43:14.390668 1762 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:43:14.392069 kubelet[1762]: I0516 00:43:14.392044 1762 factory.go:221] Registration of the containerd container factory successfully May 16 00:43:14.391000 audit[1779]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.391000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc2ea45d0 a2=0 a3=1 items=0 ppid=1762 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.391000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 16 00:43:14.396000 audit[1781]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.396000 audit[1781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdc2ab840 a2=0 a3=1 items=0 ppid=1762 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 16 00:43:14.405000 audit[1787]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.405000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=924 a0=3 a1=ffffd2497650 a2=0 a3=1 items=0 ppid=1762 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 16 00:43:14.406890 kubelet[1762]: I0516 00:43:14.406848 1762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:43:14.406000 audit[1789]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:14.406000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffd11c750 a2=0 a3=1 items=0 ppid=1762 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 16 00:43:14.407845 kubelet[1762]: I0516 00:43:14.407826 1762 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 00:43:14.407928 kubelet[1762]: I0516 00:43:14.407918 1762 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:43:14.408006 kubelet[1762]: I0516 00:43:14.407994 1762 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:43:14.408144 kubelet[1762]: I0516 00:43:14.408126 1762 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:43:14.408144 kubelet[1762]: I0516 00:43:14.408142 1762 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:43:14.408204 kubelet[1762]: I0516 00:43:14.408160 1762 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:14.408000 audit[1791]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:14.408000 audit[1791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfc7e160 a2=0 a3=1 items=0 ppid=1762 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 16 00:43:14.409353 kubelet[1762]: W0516 00:43:14.409272 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:14.409353 kubelet[1762]: E0516 00:43:14.409302 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" 
logger="UnhandledError" May 16 00:43:14.409353 kubelet[1762]: E0516 00:43:14.408105 1762 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:43:14.408000 audit[1790]: NETFILTER_CFG table=mangle:33 family=2 entries=1 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.408000 audit[1790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdce571d0 a2=0 a3=1 items=0 ppid=1762 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 16 00:43:14.409000 audit[1792]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:14.409000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffffe9d1b70 a2=0 a3=1 items=0 ppid=1762 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 16 00:43:14.409000 audit[1793]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.409000 audit[1793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb87b5a0 a2=0 a3=1 items=0 ppid=1762 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 16 00:43:14.410000 audit[1794]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:14.410000 audit[1794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff30d3300 a2=0 a3=1 items=0 ppid=1762 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 16 00:43:14.410000 audit[1795]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:14.410000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2a35480 a2=0 a3=1 items=0 ppid=1762 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 16 00:43:14.480893 kubelet[1762]: I0516 00:43:14.480867 1762 policy_none.go:49] "None policy: Start" May 16 00:43:14.481633 kubelet[1762]: I0516 00:43:14.481612 1762 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:43:14.481691 kubelet[1762]: I0516 00:43:14.481637 1762 state_mem.go:35] "Initializing new in-memory state store" May 16 00:43:14.485495 
kubelet[1762]: E0516 00:43:14.485466 1762 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:14.486264 kubelet[1762]: I0516 00:43:14.486246 1762 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:43:14.485000 audit[1762]: AVC avc: denied { mac_admin } for pid=1762 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:14.485000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:14.485000 audit[1762]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bd8e70 a1=4000977dd0 a2=4000bd8e40 a3=25 items=0 ppid=1 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:14.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:14.486452 kubelet[1762]: I0516 00:43:14.486302 1762 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 16 00:43:14.486452 kubelet[1762]: I0516 00:43:14.486403 1762 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:43:14.486452 kubelet[1762]: I0516 00:43:14.486414 1762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:43:14.486768 kubelet[1762]: I0516 00:43:14.486751 1762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:43:14.487997 kubelet[1762]: E0516 00:43:14.487972 1762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:43:14.589655 kubelet[1762]: I0516 00:43:14.588177 1762 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:43:14.589882 kubelet[1762]: E0516 00:43:14.589767 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:43:14.591171 kubelet[1762]: E0516 00:43:14.591142 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" May 16 00:43:14.686534 kubelet[1762]: I0516 00:43:14.686474 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:14.686672 kubelet[1762]: I0516 00:43:14.686548 1762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:14.686672 kubelet[1762]: I0516 00:43:14.686570 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:14.686672 kubelet[1762]: I0516 00:43:14.686588 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:14.686672 kubelet[1762]: I0516 00:43:14.686605 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:43:14.686672 kubelet[1762]: I0516 00:43:14.686620 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd993ee7f0bc9ebaa831ff9915e3cfbf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd993ee7f0bc9ebaa831ff9915e3cfbf\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:14.686785 kubelet[1762]: I0516 00:43:14.686634 1762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd993ee7f0bc9ebaa831ff9915e3cfbf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd993ee7f0bc9ebaa831ff9915e3cfbf\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:14.686785 kubelet[1762]: I0516 00:43:14.686648 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd993ee7f0bc9ebaa831ff9915e3cfbf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd993ee7f0bc9ebaa831ff9915e3cfbf\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:14.686785 kubelet[1762]: I0516 00:43:14.686662 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:14.792556 kubelet[1762]: I0516 00:43:14.792527 1762 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:43:14.793024 kubelet[1762]: E0516 00:43:14.792999 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:43:14.816810 kubelet[1762]: E0516 00:43:14.816781 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:14.817471 env[1326]: time="2025-05-16T00:43:14.817425134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd993ee7f0bc9ebaa831ff9915e3cfbf,Namespace:kube-system,Attempt:0,}" May 16 00:43:14.818591 kubelet[1762]: 
E0516 00:43:14.818569 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:14.819067 env[1326]: time="2025-05-16T00:43:14.819030789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 00:43:14.819310 kubelet[1762]: E0516 00:43:14.819291 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:14.819672 env[1326]: time="2025-05-16T00:43:14.819563197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 00:43:14.992652 kubelet[1762]: E0516 00:43:14.992544 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" May 16 00:43:15.194431 kubelet[1762]: I0516 00:43:15.194397 1762 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:43:15.194817 kubelet[1762]: E0516 00:43:15.194783 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:43:15.344134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855013592.mount: Deactivated successfully. 
May 16 00:43:15.347928 env[1326]: time="2025-05-16T00:43:15.347851060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.349210 env[1326]: time="2025-05-16T00:43:15.349184731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.351988 env[1326]: time="2025-05-16T00:43:15.351644740Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.353281 env[1326]: time="2025-05-16T00:43:15.353255039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.354009 env[1326]: time="2025-05-16T00:43:15.353985497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.354902 env[1326]: time="2025-05-16T00:43:15.354873731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.357303 env[1326]: time="2025-05-16T00:43:15.357275953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.358053 env[1326]: time="2025-05-16T00:43:15.358031148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 16 00:43:15.360794 env[1326]: time="2025-05-16T00:43:15.360764668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.361084 kubelet[1762]: W0516 00:43:15.361029 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:15.361150 kubelet[1762]: E0516 00:43:15.361096 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:15.362629 env[1326]: time="2025-05-16T00:43:15.362604640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.363852 env[1326]: time="2025-05-16T00:43:15.363818139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.364723 env[1326]: time="2025-05-16T00:43:15.364700179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:15.398276 env[1326]: time="2025-05-16T00:43:15.397925326Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:15.398276 env[1326]: time="2025-05-16T00:43:15.397983833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:15.398276 env[1326]: time="2025-05-16T00:43:15.397994863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:15.399458 env[1326]: time="2025-05-16T00:43:15.399211879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:15.399458 env[1326]: time="2025-05-16T00:43:15.399239734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:15.399458 env[1326]: time="2025-05-16T00:43:15.399250084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:15.399632 env[1326]: time="2025-05-16T00:43:15.398308938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b611331f709ed103ce603b319951bcedd4a8ef34df6e4c919ac31e335b7ab8ed pid=1817 runtime=io.containerd.runc.v2 May 16 00:43:15.399926 env[1326]: time="2025-05-16T00:43:15.399886987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c45075d35bf1baad9f9e0fe5a4cd1424f1c5fb80b6742d22dc06c667106971a2 pid=1818 runtime=io.containerd.runc.v2 May 16 00:43:15.401278 env[1326]: time="2025-05-16T00:43:15.401065398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:15.401278 env[1326]: time="2025-05-16T00:43:15.401104802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:15.401278 env[1326]: time="2025-05-16T00:43:15.401115033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:15.401505 env[1326]: time="2025-05-16T00:43:15.401435622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be86a3b2c99c65607e96c17193ce09ae48c749daaf2394396f998e1d7dd2f62d pid=1816 runtime=io.containerd.runc.v2 May 16 00:43:15.472824 env[1326]: time="2025-05-16T00:43:15.472774921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c45075d35bf1baad9f9e0fe5a4cd1424f1c5fb80b6742d22dc06c667106971a2\"" May 16 00:43:15.475289 kubelet[1762]: E0516 00:43:15.475243 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:15.475585 kubelet[1762]: W0516 00:43:15.475529 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:15.475635 kubelet[1762]: E0516 00:43:15.475591 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" 
logger="UnhandledError" May 16 00:43:15.477129 env[1326]: time="2025-05-16T00:43:15.477081016Z" level=info msg="CreateContainer within sandbox \"c45075d35bf1baad9f9e0fe5a4cd1424f1c5fb80b6742d22dc06c667106971a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:43:15.480315 env[1326]: time="2025-05-16T00:43:15.480279675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd993ee7f0bc9ebaa831ff9915e3cfbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"be86a3b2c99c65607e96c17193ce09ae48c749daaf2394396f998e1d7dd2f62d\"" May 16 00:43:15.480664 env[1326]: time="2025-05-16T00:43:15.480641387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"b611331f709ed103ce603b319951bcedd4a8ef34df6e4c919ac31e335b7ab8ed\"" May 16 00:43:15.481701 kubelet[1762]: E0516 00:43:15.481673 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:15.482031 kubelet[1762]: E0516 00:43:15.482007 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:15.483264 env[1326]: time="2025-05-16T00:43:15.483234115Z" level=info msg="CreateContainer within sandbox \"b611331f709ed103ce603b319951bcedd4a8ef34df6e4c919ac31e335b7ab8ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:43:15.484079 env[1326]: time="2025-05-16T00:43:15.484051294Z" level=info msg="CreateContainer within sandbox \"be86a3b2c99c65607e96c17193ce09ae48c749daaf2394396f998e1d7dd2f62d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:43:15.535187 env[1326]: time="2025-05-16T00:43:15.535137601Z" level=info 
msg="CreateContainer within sandbox \"c45075d35bf1baad9f9e0fe5a4cd1424f1c5fb80b6742d22dc06c667106971a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a3b687a8d872565132bd4485e9d58227877de7cc7614af237e352001a24e380\"" May 16 00:43:15.535683 env[1326]: time="2025-05-16T00:43:15.535633432Z" level=info msg="CreateContainer within sandbox \"b611331f709ed103ce603b319951bcedd4a8ef34df6e4c919ac31e335b7ab8ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b6e8917fabe7c1eb212913d8a7aac4ffd2c80b3b78198dc2e09be516393fd46\"" May 16 00:43:15.536208 env[1326]: time="2025-05-16T00:43:15.536174461Z" level=info msg="StartContainer for \"3a3b687a8d872565132bd4485e9d58227877de7cc7614af237e352001a24e380\"" May 16 00:43:15.536390 env[1326]: time="2025-05-16T00:43:15.536186930Z" level=info msg="StartContainer for \"6b6e8917fabe7c1eb212913d8a7aac4ffd2c80b3b78198dc2e09be516393fd46\"" May 16 00:43:15.539585 env[1326]: time="2025-05-16T00:43:15.539540488Z" level=info msg="CreateContainer within sandbox \"be86a3b2c99c65607e96c17193ce09ae48c749daaf2394396f998e1d7dd2f62d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6346e1b5292a29d19fac9daa6b659a2546dd44f46c5476ac90eeba206576a361\"" May 16 00:43:15.540070 env[1326]: time="2025-05-16T00:43:15.540042193Z" level=info msg="StartContainer for \"6346e1b5292a29d19fac9daa6b659a2546dd44f46c5476ac90eeba206576a361\"" May 16 00:43:15.621228 env[1326]: time="2025-05-16T00:43:15.621112986Z" level=info msg="StartContainer for \"6b6e8917fabe7c1eb212913d8a7aac4ffd2c80b3b78198dc2e09be516393fd46\" returns successfully" May 16 00:43:15.662556 env[1326]: time="2025-05-16T00:43:15.662502568Z" level=info msg="StartContainer for \"6346e1b5292a29d19fac9daa6b659a2546dd44f46c5476ac90eeba206576a361\" returns successfully" May 16 00:43:15.675812 env[1326]: time="2025-05-16T00:43:15.675756348Z" level=info msg="StartContainer for 
\"3a3b687a8d872565132bd4485e9d58227877de7cc7614af237e352001a24e380\" returns successfully" May 16 00:43:15.709311 kubelet[1762]: W0516 00:43:15.709166 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:43:15.709311 kubelet[1762]: E0516 00:43:15.709249 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:15.793215 kubelet[1762]: E0516 00:43:15.793143 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" May 16 00:43:15.996832 kubelet[1762]: I0516 00:43:15.996729 1762 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:43:16.420400 kubelet[1762]: E0516 00:43:16.420300 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:16.422302 kubelet[1762]: E0516 00:43:16.422244 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:16.423824 kubelet[1762]: E0516 00:43:16.423752 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:17.426148 kubelet[1762]: E0516 
00:43:17.426118 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:17.426644 kubelet[1762]: E0516 00:43:17.426614 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:17.436233 kubelet[1762]: E0516 00:43:17.436193 1762 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 00:43:17.549847 kubelet[1762]: E0516 00:43:17.549713 1762 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183fdb336d4922c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:43:14.366366401 +0000 UTC m=+0.979163955,LastTimestamp:2025-05-16 00:43:14.366366401 +0000 UTC m=+0.979163955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:43:17.607368 kubelet[1762]: I0516 00:43:17.607333 1762 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 00:43:18.352641 kubelet[1762]: I0516 00:43:18.352597 1762 apiserver.go:52] "Watching apiserver" May 16 00:43:18.388055 kubelet[1762]: I0516 00:43:18.388018 1762 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:43:19.360653 systemd[1]: Reloading. 
May 16 00:43:19.407279 /usr/lib/systemd/system-generators/torcx-generator[2060]: time="2025-05-16T00:43:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:43:19.407306 /usr/lib/systemd/system-generators/torcx-generator[2060]: time="2025-05-16T00:43:19Z" level=info msg="torcx already run" May 16 00:43:19.472020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:43:19.472200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:43:19.490547 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:43:19.563534 systemd[1]: Stopping kubelet.service... May 16 00:43:19.585433 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:43:19.585731 systemd[1]: Stopped kubelet.service. May 16 00:43:19.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.586315 kernel: kauditd_printk_skb: 43 callbacks suppressed May 16 00:43:19.586362 kernel: audit: type=1131 audit(1747356199.584:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.587584 systemd[1]: Starting kubelet.service... May 16 00:43:19.688083 systemd[1]: Started kubelet.service. 
May 16 00:43:19.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.693014 kernel: audit: type=1130 audit(1747356199.688:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.728649 kubelet[2112]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:43:19.728649 kubelet[2112]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:43:19.728649 kubelet[2112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:43:19.729057 kubelet[2112]: I0516 00:43:19.728684 2112 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:43:19.734344 kubelet[2112]: I0516 00:43:19.734307 2112 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:43:19.734484 kubelet[2112]: I0516 00:43:19.734473 2112 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:43:19.734749 kubelet[2112]: I0516 00:43:19.734731 2112 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:43:19.736114 kubelet[2112]: I0516 00:43:19.736091 2112 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 00:43:19.739056 kubelet[2112]: I0516 00:43:19.738361 2112 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:43:19.745020 kubelet[2112]: E0516 00:43:19.744946 2112 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:43:19.745020 kubelet[2112]: I0516 00:43:19.745020 2112 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:43:19.749248 kubelet[2112]: I0516 00:43:19.749223 2112 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:43:19.749746 kubelet[2112]: I0516 00:43:19.749727 2112 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:43:19.749986 kubelet[2112]: I0516 00:43:19.749935 2112 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:43:19.750222 kubelet[2112]: I0516 00:43:19.750046 2112 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} May 16 00:43:19.750352 kubelet[2112]: I0516 00:43:19.750337 2112 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:43:19.750409 kubelet[2112]: I0516 00:43:19.750400 2112 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:43:19.750498 kubelet[2112]: I0516 00:43:19.750488 2112 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:19.750658 kubelet[2112]: I0516 00:43:19.750645 2112 kubelet.go:408] "Attempting to sync node with API server" May 16 00:43:19.750728 kubelet[2112]: I0516 00:43:19.750716 2112 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:43:19.750809 kubelet[2112]: I0516 00:43:19.750797 2112 kubelet.go:314] "Adding apiserver pod source" May 16 00:43:19.750880 kubelet[2112]: I0516 00:43:19.750869 2112 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:43:19.751938 kubelet[2112]: I0516 00:43:19.751919 2112 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:43:19.752758 kubelet[2112]: I0516 00:43:19.752742 2112 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:43:19.754072 kubelet[2112]: I0516 00:43:19.754046 2112 server.go:1274] "Started kubelet" May 16 00:43:19.755078 kubelet[2112]: I0516 00:43:19.755022 2112 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:43:19.755608 kubelet[2112]: I0516 00:43:19.755580 2112 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:43:19.755717 kubelet[2112]: I0516 00:43:19.755690 2112 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:43:19.754000 audit[2112]: AVC avc: denied { mac_admin } for pid=2112 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:19.756030 kubelet[2112]: I0516 00:43:19.756006 2112 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 16 00:43:19.756113 kubelet[2112]: I0516 00:43:19.756098 2112 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 16 00:43:19.756182 kubelet[2112]: I0516 00:43:19.756172 2112 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:43:19.757665 kubelet[2112]: E0516 00:43:19.757623 2112 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:43:19.754000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:19.757953 kubelet[2112]: I0516 00:43:19.757921 2112 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:43:19.758691 kernel: audit: type=1400 audit(1747356199.754:225): avc: denied { mac_admin } for pid=2112 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:19.758758 kernel: audit: type=1401 audit(1747356199.754:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:19.761745 kubelet[2112]: I0516 00:43:19.761724 2112 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:43:19.762467 kubelet[2112]: E0516 00:43:19.762418 2112 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" May 16 00:43:19.754000 audit[2112]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40007d9170 a1=4000794f48 a2=40007d9140 a3=25 items=0 ppid=1 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:19.763365 kubelet[2112]: I0516 00:43:19.763347 2112 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:43:19.763542 kubelet[2112]: I0516 00:43:19.763530 2112 reconciler.go:26] "Reconciler: start to sync state" May 16 00:43:19.765199 kubelet[2112]: I0516 00:43:19.765166 2112 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:43:19.766035 kernel: audit: type=1300 audit(1747356199.754:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=40007d9170 a1=4000794f48 a2=40007d9140 a3=25 items=0 ppid=1 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:19.766249 kubelet[2112]: I0516 00:43:19.766229 2112 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 00:43:19.766346 kubelet[2112]: I0516 00:43:19.766334 2112 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:43:19.766406 kubelet[2112]: I0516 00:43:19.766397 2112 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:43:19.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:19.766577 kubelet[2112]: E0516 00:43:19.766549 2112 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:43:19.768615 kubelet[2112]: I0516 00:43:19.768584 2112 factory.go:221] Registration of the systemd container factory successfully May 16 00:43:19.768838 kubelet[2112]: I0516 00:43:19.768813 2112 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:43:19.769024 kernel: audit: type=1327 audit(1747356199.754:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:19.769081 kernel: audit: type=1400 audit(1747356199.755:226): avc: denied { mac_admin } for pid=2112 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:19.755000 audit[2112]: AVC avc: denied { mac_admin } for pid=2112 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 16 00:43:19.755000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:19.772804 kernel: audit: type=1401 audit(1747356199.755:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:19.772866 kernel: audit: type=1300 audit(1747356199.755:226): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a28980 a1=4000794f60 a2=40007d9200 a3=25 items=0 ppid=1 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:19.755000 audit[2112]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a28980 a1=4000794f60 a2=40007d9200 a3=25 items=0 ppid=1 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:19.776500 kubelet[2112]: I0516 00:43:19.776460 2112 server.go:449] "Adding debug handlers to kubelet server" May 16 00:43:19.755000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:19.780069 kernel: audit: type=1327 audit(1747356199.755:226): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:19.796248 kubelet[2112]: I0516 00:43:19.796048 2112 factory.go:221] Registration of the containerd container factory successfully May 16 00:43:19.836109 kubelet[2112]: I0516 00:43:19.836069 2112 cpu_manager.go:214] "Starting CPU 
manager" policy="none" May 16 00:43:19.836109 kubelet[2112]: I0516 00:43:19.836087 2112 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:43:19.836109 kubelet[2112]: I0516 00:43:19.836109 2112 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:19.836275 kubelet[2112]: I0516 00:43:19.836259 2112 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:43:19.836300 kubelet[2112]: I0516 00:43:19.836269 2112 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:43:19.836300 kubelet[2112]: I0516 00:43:19.836291 2112 policy_none.go:49] "None policy: Start" May 16 00:43:19.836895 kubelet[2112]: I0516 00:43:19.836860 2112 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:43:19.836895 kubelet[2112]: I0516 00:43:19.836889 2112 state_mem.go:35] "Initializing new in-memory state store" May 16 00:43:19.837057 kubelet[2112]: I0516 00:43:19.837041 2112 state_mem.go:75] "Updated machine memory state" May 16 00:43:19.838198 kubelet[2112]: I0516 00:43:19.838178 2112 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:43:19.837000 audit[2112]: AVC avc: denied { mac_admin } for pid=2112 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:19.837000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 16 00:43:19.837000 audit[2112]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400124ad50 a1=400124c4b0 a2=400124ad20 a3=25 items=0 ppid=1 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:19.837000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 16 00:43:19.838403 kubelet[2112]: I0516 00:43:19.838234 2112 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 16 00:43:19.838403 kubelet[2112]: I0516 00:43:19.838367 2112 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:43:19.838403 kubelet[2112]: I0516 00:43:19.838378 2112 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:43:19.838783 kubelet[2112]: I0516 00:43:19.838761 2112 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:43:19.944190 kubelet[2112]: I0516 00:43:19.942573 2112 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:43:19.964623 kubelet[2112]: I0516 00:43:19.964586 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd993ee7f0bc9ebaa831ff9915e3cfbf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd993ee7f0bc9ebaa831ff9915e3cfbf\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:19.964799 kubelet[2112]: I0516 00:43:19.964782 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:19.964871 kubelet[2112]: I0516 00:43:19.964857 2112 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:19.964937 kubelet[2112]: I0516 00:43:19.964925 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:19.965032 kubelet[2112]: I0516 00:43:19.965018 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:19.965132 kubelet[2112]: I0516 00:43:19.965119 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:43:19.965199 kubelet[2112]: I0516 00:43:19.965188 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd993ee7f0bc9ebaa831ff9915e3cfbf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd993ee7f0bc9ebaa831ff9915e3cfbf\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:19.965265 kubelet[2112]: I0516 00:43:19.965254 2112 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:19.965387 kubelet[2112]: I0516 00:43:19.965372 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd993ee7f0bc9ebaa831ff9915e3cfbf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd993ee7f0bc9ebaa831ff9915e3cfbf\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:19.969019 kubelet[2112]: I0516 00:43:19.968661 2112 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 00:43:19.969019 kubelet[2112]: I0516 00:43:19.968769 2112 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 00:43:20.177705 kubelet[2112]: E0516 00:43:20.177666 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:20.177987 kubelet[2112]: E0516 00:43:20.177670 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:20.178050 kubelet[2112]: E0516 00:43:20.177678 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:20.752221 kubelet[2112]: I0516 00:43:20.752183 2112 apiserver.go:52] "Watching apiserver" May 16 00:43:20.764477 kubelet[2112]: I0516 00:43:20.764441 2112 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:43:20.821837 
kubelet[2112]: E0516 00:43:20.821799 2112 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:43:20.822013 kubelet[2112]: E0516 00:43:20.821991 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:20.822732 kubelet[2112]: E0516 00:43:20.822708 2112 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:43:20.822927 kubelet[2112]: E0516 00:43:20.822910 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:20.823057 kubelet[2112]: E0516 00:43:20.823032 2112 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 00:43:20.823214 kubelet[2112]: E0516 00:43:20.823199 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:20.840500 kubelet[2112]: I0516 00:43:20.840428 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.840411741 podStartE2EDuration="1.840411741s" podCreationTimestamp="2025-05-16 00:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:20.831920569 +0000 UTC m=+1.140542764" watchObservedRunningTime="2025-05-16 00:43:20.840411741 +0000 UTC m=+1.149033936" May 16 00:43:20.847806 kubelet[2112]: I0516 00:43:20.847738 2112 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8477259 podStartE2EDuration="1.8477259s" podCreationTimestamp="2025-05-16 00:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:20.847634982 +0000 UTC m=+1.156257177" watchObservedRunningTime="2025-05-16 00:43:20.8477259 +0000 UTC m=+1.156348095" May 16 00:43:20.847906 kubelet[2112]: I0516 00:43:20.847860 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.847856799 podStartE2EDuration="1.847856799s" podCreationTimestamp="2025-05-16 00:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:20.84086625 +0000 UTC m=+1.149488445" watchObservedRunningTime="2025-05-16 00:43:20.847856799 +0000 UTC m=+1.156478994" May 16 00:43:21.814116 kubelet[2112]: E0516 00:43:21.814060 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:21.814972 kubelet[2112]: E0516 00:43:21.814937 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:21.815221 kubelet[2112]: E0516 00:43:21.815204 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:22.815354 kubelet[2112]: E0516 00:43:22.815311 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 
16 00:43:26.137601 kubelet[2112]: E0516 00:43:26.137566 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:26.821334 kubelet[2112]: E0516 00:43:26.821298 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:27.022513 kubelet[2112]: I0516 00:43:27.022470 2112 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:43:27.022888 env[1326]: time="2025-05-16T00:43:27.022838295Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:43:27.023210 kubelet[2112]: I0516 00:43:27.023079 2112 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:43:27.820543 kubelet[2112]: I0516 00:43:27.820482 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6aa77a03-4258-46fe-8623-81b0d03ee16f-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-bq657\" (UID: \"6aa77a03-4258-46fe-8623-81b0d03ee16f\") " pod="tigera-operator/tigera-operator-7c5755cdcb-bq657" May 16 00:43:27.820543 kubelet[2112]: I0516 00:43:27.820538 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpxm6\" (UniqueName: \"kubernetes.io/projected/6aa77a03-4258-46fe-8623-81b0d03ee16f-kube-api-access-hpxm6\") pod \"tigera-operator-7c5755cdcb-bq657\" (UID: \"6aa77a03-4258-46fe-8623-81b0d03ee16f\") " pod="tigera-operator/tigera-operator-7c5755cdcb-bq657" May 16 00:43:27.823033 kubelet[2112]: E0516 00:43:27.823005 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:27.921503 kubelet[2112]: I0516 00:43:27.921437 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68f98441-6e6f-453c-9d47-de16d52f9759-lib-modules\") pod \"kube-proxy-6kpg8\" (UID: \"68f98441-6e6f-453c-9d47-de16d52f9759\") " pod="kube-system/kube-proxy-6kpg8" May 16 00:43:27.921670 kubelet[2112]: I0516 00:43:27.921516 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68f98441-6e6f-453c-9d47-de16d52f9759-kube-proxy\") pod \"kube-proxy-6kpg8\" (UID: \"68f98441-6e6f-453c-9d47-de16d52f9759\") " pod="kube-system/kube-proxy-6kpg8" May 16 00:43:27.921670 kubelet[2112]: I0516 00:43:27.921536 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pph2\" (UniqueName: \"kubernetes.io/projected/68f98441-6e6f-453c-9d47-de16d52f9759-kube-api-access-8pph2\") pod \"kube-proxy-6kpg8\" (UID: \"68f98441-6e6f-453c-9d47-de16d52f9759\") " pod="kube-system/kube-proxy-6kpg8" May 16 00:43:27.921670 kubelet[2112]: I0516 00:43:27.921566 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68f98441-6e6f-453c-9d47-de16d52f9759-xtables-lock\") pod \"kube-proxy-6kpg8\" (UID: \"68f98441-6e6f-453c-9d47-de16d52f9759\") " pod="kube-system/kube-proxy-6kpg8" May 16 00:43:27.929103 kubelet[2112]: I0516 00:43:27.929070 2112 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:43:28.105320 env[1326]: time="2025-05-16T00:43:28.105207950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-bq657,Uid:6aa77a03-4258-46fe-8623-81b0d03ee16f,Namespace:tigera-operator,Attempt:0,}" May 16 00:43:28.134686 env[1326]: time="2025-05-16T00:43:28.134509116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:28.134686 env[1326]: time="2025-05-16T00:43:28.134552914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:28.134686 env[1326]: time="2025-05-16T00:43:28.134563593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:28.135112 env[1326]: time="2025-05-16T00:43:28.135062972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f94095c2b1644438107a87acf2766e063e68810ed3697668b929693c0242404c pid=2169 runtime=io.containerd.runc.v2 May 16 00:43:28.169148 kubelet[2112]: E0516 00:43:28.168844 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:28.169543 env[1326]: time="2025-05-16T00:43:28.169483962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kpg8,Uid:68f98441-6e6f-453c-9d47-de16d52f9759,Namespace:kube-system,Attempt:0,}" May 16 00:43:28.186271 env[1326]: time="2025-05-16T00:43:28.185412451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:28.186271 env[1326]: time="2025-05-16T00:43:28.185453769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:28.186271 env[1326]: time="2025-05-16T00:43:28.185463968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:28.186271 env[1326]: time="2025-05-16T00:43:28.185628441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7babfe7dac7c8ec6dda1a4b8820b6cbef04cc6c7e4ff1b96adc15ccc68475dd4 pid=2202 runtime=io.containerd.runc.v2 May 16 00:43:28.195680 env[1326]: time="2025-05-16T00:43:28.195635220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-bq657,Uid:6aa77a03-4258-46fe-8623-81b0d03ee16f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f94095c2b1644438107a87acf2766e063e68810ed3697668b929693c0242404c\"" May 16 00:43:28.198384 env[1326]: time="2025-05-16T00:43:28.198343026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 00:43:28.232017 env[1326]: time="2025-05-16T00:43:28.231940570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kpg8,Uid:68f98441-6e6f-453c-9d47-de16d52f9759,Namespace:kube-system,Attempt:0,} returns sandbox id \"7babfe7dac7c8ec6dda1a4b8820b6cbef04cc6c7e4ff1b96adc15ccc68475dd4\"" May 16 00:43:28.232863 kubelet[2112]: E0516 00:43:28.232830 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:28.236265 env[1326]: time="2025-05-16T00:43:28.236212390Z" level=info msg="CreateContainer within sandbox \"7babfe7dac7c8ec6dda1a4b8820b6cbef04cc6c7e4ff1b96adc15ccc68475dd4\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:43:28.263190 env[1326]: time="2025-05-16T00:43:28.263135655Z" level=info msg="CreateContainer within sandbox \"7babfe7dac7c8ec6dda1a4b8820b6cbef04cc6c7e4ff1b96adc15ccc68475dd4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f044ac17459506edd2ae405de5a0e51bc8db90a7602ba7df049abf2f1e4c285c\"" May 16 00:43:28.264248 env[1326]: time="2025-05-16T00:43:28.264218930Z" level=info msg="StartContainer for \"f044ac17459506edd2ae405de5a0e51bc8db90a7602ba7df049abf2f1e4c285c\"" May 16 00:43:28.345051 env[1326]: time="2025-05-16T00:43:28.344984686Z" level=info msg="StartContainer for \"f044ac17459506edd2ae405de5a0e51bc8db90a7602ba7df049abf2f1e4c285c\" returns successfully" May 16 00:43:28.514000 audit[2307]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.517278 kernel: kauditd_printk_skb: 4 callbacks suppressed May 16 00:43:28.517379 kernel: audit: type=1325 audit(1747356208.514:228): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.517399 kernel: audit: type=1300 audit(1747356208.514:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf40ac30 a2=0 a3=1 items=0 ppid=2257 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.514000 audit[2307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf40ac30 a2=0 a3=1 items=0 ppid=2257 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.514000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 16 00:43:28.522133 kernel: audit: type=1327 audit(1747356208.514:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 16 00:43:28.514000 audit[2308]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.523917 kernel: audit: type=1325 audit(1747356208.514:229): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.514000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffceb26b20 a2=0 a3=1 items=0 ppid=2257 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.527044 kernel: audit: type=1300 audit(1747356208.514:229): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffceb26b20 a2=0 a3=1 items=0 ppid=2257 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.514000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 16 00:43:28.528426 kernel: audit: type=1327 audit(1747356208.514:229): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 16 00:43:28.516000 audit[2310]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.529783 kernel: audit: type=1325 audit(1747356208.516:230): table=nat:40 
family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.516000 audit[2310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebb2fd20 a2=0 a3=1 items=0 ppid=2257 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.532824 kernel: audit: type=1300 audit(1747356208.516:230): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebb2fd20 a2=0 a3=1 items=0 ppid=2257 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 16 00:43:28.534355 kernel: audit: type=1327 audit(1747356208.516:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 16 00:43:28.519000 audit[2311]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.535890 kernel: audit: type=1325 audit(1747356208.519:231): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.519000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc464ca60 a2=0 a3=1 items=0 ppid=2257 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.519000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 16 00:43:28.519000 audit[2312]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.519000 audit[2312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde802dc0 a2=0 a3=1 items=0 ppid=2257 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 16 00:43:28.521000 audit[2313]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.521000 audit[2313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda1f7ba0 a2=0 a3=1 items=0 ppid=2257 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 16 00:43:28.617000 audit[2314]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.617000 audit[2314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcd52b720 a2=0 a3=1 items=0 ppid=2257 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.617000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 16 00:43:28.620000 audit[2316]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.620000 audit[2316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffced61ae0 a2=0 a3=1 items=0 ppid=2257 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 16 00:43:28.627000 audit[2319]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.627000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd28b9af0 a2=0 a3=1 items=0 ppid=2257 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 16 00:43:28.628000 audit[2320]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.628000 audit[2320]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffe68eef30 a2=0 a3=1 items=0 ppid=2257 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 16 00:43:28.630000 audit[2322]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.630000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebe25bf0 a2=0 a3=1 items=0 ppid=2257 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 16 00:43:28.632000 audit[2323]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.632000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1199b00 a2=0 a3=1 items=0 ppid=2257 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 16 00:43:28.634000 audit[2325]: NETFILTER_CFG table=filter:50 family=2 entries=1 
op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.634000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed6b5660 a2=0 a3=1 items=0 ppid=2257 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.634000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 16 00:43:28.637000 audit[2328]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.637000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffcfb3ac0 a2=0 a3=1 items=0 ppid=2257 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 16 00:43:28.639000 audit[2329]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.639000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc630cf60 a2=0 a3=1 items=0 ppid=2257 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 16 00:43:28.644000 audit[2331]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.644000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1640110 a2=0 a3=1 items=0 ppid=2257 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.644000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 16 00:43:28.645000 audit[2332]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.645000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe52fd350 a2=0 a3=1 items=0 ppid=2257 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 16 00:43:28.648000 audit[2334]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.648000 audit[2334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4deed80 a2=0 a3=1 items=0 ppid=2257 pid=2334 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 16 00:43:28.652000 audit[2337]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.652000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe0649550 a2=0 a3=1 items=0 ppid=2257 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 16 00:43:28.655000 audit[2340]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.655000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd3f4d810 a2=0 a3=1 items=0 ppid=2257 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.655000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 16 00:43:28.657000 audit[2341]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.657000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc7187b50 a2=0 a3=1 items=0 ppid=2257 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.657000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 16 00:43:28.659000 audit[2343]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.659000 audit[2343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff89b4830 a2=0 a3=1 items=0 ppid=2257 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 16 00:43:28.663000 audit[2346]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.663000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeccfd2d0 a2=0 a3=1 items=0 ppid=2257 pid=2346 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 16 00:43:28.664000 audit[2347]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.664000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe153e280 a2=0 a3=1 items=0 ppid=2257 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 16 00:43:28.666000 audit[2349]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 16 00:43:28.666000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe0a6e8d0 a2=0 a3=1 items=0 ppid=2257 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 16 00:43:28.691000 audit[2355]: NETFILTER_CFG table=filter:63 family=2 entries=8 
op=nft_register_rule pid=2355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:28.691000 audit[2355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9403700 a2=0 a3=1 items=0 ppid=2257 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:28.704000 audit[2355]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:28.704000 audit[2355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc9403700 a2=0 a3=1 items=0 ppid=2257 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:28.706000 audit[2360]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.706000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcdcbee40 a2=0 a3=1 items=0 ppid=2257 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 16 00:43:28.708000 audit[2362]: 
NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.708000 audit[2362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd46a12e0 a2=0 a3=1 items=0 ppid=2257 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 16 00:43:28.712000 audit[2365]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.712000 audit[2365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffed2cabf0 a2=0 a3=1 items=0 ppid=2257 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 16 00:43:28.713000 audit[2366]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.713000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4225380 a2=0 a3=1 items=0 ppid=2257 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 16 00:43:28.715000 audit[2368]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.715000 audit[2368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd9631a0 a2=0 a3=1 items=0 ppid=2257 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 16 00:43:28.717000 audit[2369]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.717000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9113e30 a2=0 a3=1 items=0 ppid=2257 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 16 00:43:28.719000 audit[2371]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.719000 audit[2371]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffe0ff7150 a2=0 a3=1 items=0 ppid=2257 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 16 00:43:28.722000 audit[2374]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.722000 audit[2374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff03f9d10 a2=0 a3=1 items=0 ppid=2257 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 16 00:43:28.723000 audit[2375]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.723000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdee35620 a2=0 a3=1 items=0 ppid=2257 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.723000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 16 00:43:28.726000 audit[2377]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.726000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1e57710 a2=0 a3=1 items=0 ppid=2257 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 16 00:43:28.727000 audit[2378]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.727000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc295bd20 a2=0 a3=1 items=0 ppid=2257 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 16 00:43:28.729000 audit[2380]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.729000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda3384d0 a2=0 a3=1 items=0 ppid=2257 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 16 00:43:28.733000 audit[2383]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.733000 audit[2383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6321100 a2=0 a3=1 items=0 ppid=2257 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 16 00:43:28.736000 audit[2386]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.736000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7581090 a2=0 a3=1 items=0 ppid=2257 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.736000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 16 00:43:28.737000 audit[2387]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.737000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3338cd0 a2=0 a3=1 items=0 ppid=2257 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.737000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 16 00:43:28.740000 audit[2389]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.740000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdbf7ef80 a2=0 a3=1 items=0 ppid=2257 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.740000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 16 00:43:28.744000 audit[2392]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.744000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd760add0 a2=0 a3=1 items=0 ppid=2257 
pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 16 00:43:28.745000 audit[2393]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.745000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5c421f0 a2=0 a3=1 items=0 ppid=2257 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 16 00:43:28.747000 audit[2395]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.747000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc6e2ffb0 a2=0 a3=1 items=0 ppid=2257 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 16 00:43:28.748000 audit[2396]: NETFILTER_CFG table=filter:84 
family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.748000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdea318f0 a2=0 a3=1 items=0 ppid=2257 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 16 00:43:28.751000 audit[2398]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.751000 audit[2398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcddc8ad0 a2=0 a3=1 items=0 ppid=2257 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 16 00:43:28.754000 audit[2401]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 16 00:43:28.754000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe10b9420 a2=0 a3=1 items=0 ppid=2257 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 16 00:43:28.758000 
audit[2403]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 16 00:43:28.758000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffca36ae10 a2=0 a3=1 items=0 ppid=2257 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.758000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:28.759000 audit[2403]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 16 00:43:28.759000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffca36ae10 a2=0 a3=1 items=0 ppid=2257 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:28.759000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:28.822333 kubelet[2112]: E0516 00:43:28.822219 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:28.826531 kubelet[2112]: E0516 00:43:28.826497 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:28.831469 kubelet[2112]: E0516 00:43:28.831394 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:28.838403 kubelet[2112]: I0516 00:43:28.838337 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6kpg8" podStartSLOduration=1.838320136 podStartE2EDuration="1.838320136s" podCreationTimestamp="2025-05-16 00:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:28.836379498 +0000 UTC m=+9.145001693" watchObservedRunningTime="2025-05-16 00:43:28.838320136 +0000 UTC m=+9.146942291" May 16 00:43:29.257498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876092920.mount: Deactivated successfully. May 16 00:43:29.831384 env[1326]: time="2025-05-16T00:43:29.831342727Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:29.832938 env[1326]: time="2025-05-16T00:43:29.832902904Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:29.834303 env[1326]: time="2025-05-16T00:43:29.834269210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:29.836458 env[1326]: time="2025-05-16T00:43:29.836420844Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:29.836956 env[1326]: time="2025-05-16T00:43:29.836923504Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference 
\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 16 00:43:29.839488 env[1326]: time="2025-05-16T00:43:29.839428564Z" level=info msg="CreateContainer within sandbox \"f94095c2b1644438107a87acf2766e063e68810ed3697668b929693c0242404c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 00:43:29.849621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912174495.mount: Deactivated successfully. May 16 00:43:29.851122 env[1326]: time="2025-05-16T00:43:29.850868868Z" level=info msg="CreateContainer within sandbox \"f94095c2b1644438107a87acf2766e063e68810ed3697668b929693c0242404c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928\"" May 16 00:43:29.851582 env[1326]: time="2025-05-16T00:43:29.851543601Z" level=info msg="StartContainer for \"b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928\"" May 16 00:43:29.919200 env[1326]: time="2025-05-16T00:43:29.919145586Z" level=info msg="StartContainer for \"b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928\" returns successfully" May 16 00:43:30.401293 kubelet[2112]: E0516 00:43:30.401248 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:30.844099 kubelet[2112]: I0516 00:43:30.843811 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-bq657" podStartSLOduration=2.2027270899999998 podStartE2EDuration="3.843785827s" podCreationTimestamp="2025-05-16 00:43:27 +0000 UTC" firstStartedPulling="2025-05-16 00:43:28.197076559 +0000 UTC m=+8.505698714" lastFinishedPulling="2025-05-16 00:43:29.838135256 +0000 UTC m=+10.146757451" observedRunningTime="2025-05-16 00:43:30.84372383 +0000 UTC m=+11.152346025" watchObservedRunningTime="2025-05-16 
00:43:30.843785827 +0000 UTC m=+11.152408022" May 16 00:43:32.142897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928-rootfs.mount: Deactivated successfully. May 16 00:43:32.250104 env[1326]: time="2025-05-16T00:43:32.250056176Z" level=info msg="shim disconnected" id=b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928 May 16 00:43:32.250675 env[1326]: time="2025-05-16T00:43:32.250639517Z" level=warning msg="cleaning up after shim disconnected" id=b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928 namespace=k8s.io May 16 00:43:32.250675 env[1326]: time="2025-05-16T00:43:32.250668756Z" level=info msg="cleaning up dead shim" May 16 00:43:32.307523 env[1326]: time="2025-05-16T00:43:32.307466951Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:43:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2465 runtime=io.containerd.runc.v2\n" May 16 00:43:32.839147 kubelet[2112]: I0516 00:43:32.839107 2112 scope.go:117] "RemoveContainer" containerID="b5fe44a74596618b8b2187112b0840a5cd48cb30d382d058addcb8788e5f8928" May 16 00:43:32.848379 env[1326]: time="2025-05-16T00:43:32.848299183Z" level=info msg="CreateContainer within sandbox \"f94095c2b1644438107a87acf2766e063e68810ed3697668b929693c0242404c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 16 00:43:32.869207 env[1326]: time="2025-05-16T00:43:32.868929604Z" level=info msg="CreateContainer within sandbox \"f94095c2b1644438107a87acf2766e063e68810ed3697668b929693c0242404c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5d8deeca47b1cfab492228f0df1d11eb7b4b81cf5e3dfc22da5acb472b67ac6a\"" May 16 00:43:32.869941 env[1326]: time="2025-05-16T00:43:32.869910970Z" level=info msg="StartContainer for \"5d8deeca47b1cfab492228f0df1d11eb7b4b81cf5e3dfc22da5acb472b67ac6a\"" May 16 00:43:32.982507 env[1326]: time="2025-05-16T00:43:32.982434477Z" level=info 
msg="StartContainer for \"5d8deeca47b1cfab492228f0df1d11eb7b4b81cf5e3dfc22da5acb472b67ac6a\" returns successfully" May 16 00:43:35.243930 sudo[1489]: pam_unix(sudo:session): session closed for user root May 16 00:43:35.243000 audit[1489]: USER_END pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:43:35.246786 kernel: kauditd_printk_skb: 143 callbacks suppressed May 16 00:43:35.246878 kernel: audit: type=1106 audit(1747356215.243:279): pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:43:35.246932 kernel: audit: type=1104 audit(1747356215.243:280): pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 16 00:43:35.243000 audit[1489]: CRED_DISP pid=1489 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 16 00:43:35.250284 sshd[1483]: pam_unix(sshd:session): session closed for user core May 16 00:43:35.250000 audit[1483]: USER_END pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:35.250000 audit[1483]: CRED_DISP pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:35.254933 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:50030.service: Deactivated successfully. May 16 00:43:35.256225 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:43:35.256411 kernel: audit: type=1106 audit(1747356215.250:281): pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:35.256466 kernel: audit: type=1104 audit(1747356215.250:282): pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:35.256560 systemd-logind[1310]: Session 7 logged out. Waiting for processes to exit. May 16 00:43:35.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:35.259043 kernel: audit: type=1131 audit(1747356215.253:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:35.259344 systemd-logind[1310]: Removed session 7. May 16 00:43:36.975071 update_engine[1312]: I0516 00:43:36.975021 1312 update_attempter.cc:509] Updating boot flags... May 16 00:43:38.074000 audit[2575]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:38.074000 audit[2575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffe944b30 a2=0 a3=1 items=0 ppid=2257 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:38.080650 kernel: audit: type=1325 audit(1747356218.074:284): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:38.080733 kernel: audit: type=1300 audit(1747356218.074:284): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffe944b30 a2=0 a3=1 items=0 ppid=2257 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:38.080766 kernel: audit: type=1327 audit(1747356218.074:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:38.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:38.089000 audit[2575]: NETFILTER_CFG table=nat:90 
family=2 entries=12 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:38.089000 audit[2575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe944b30 a2=0 a3=1 items=0 ppid=2257 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:38.094850 kernel: audit: type=1325 audit(1747356218.089:285): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:38.094925 kernel: audit: type=1300 audit(1747356218.089:285): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe944b30 a2=0 a3=1 items=0 ppid=2257 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:38.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:38.114000 audit[2577]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:38.114000 audit[2577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc1e8afa0 a2=0 a3=1 items=0 ppid=2257 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:38.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:38.121000 audit[2577]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2577 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:38.121000 audit[2577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc1e8afa0 a2=0 a3=1 items=0 ppid=2257 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:38.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.559000 audit[2579]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.561248 kernel: kauditd_printk_skb: 7 callbacks suppressed May 16 00:43:41.561326 kernel: audit: type=1325 audit(1747356221.559:288): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.559000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc8f56b20 a2=0 a3=1 items=0 ppid=2257 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.566984 kernel: audit: type=1300 audit(1747356221.559:288): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc8f56b20 a2=0 a3=1 items=0 ppid=2257 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.568974 kernel: audit: type=1327 audit(1747356221.559:288): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.569000 audit[2579]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.569000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8f56b20 a2=0 a3=1 items=0 ppid=2257 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.575441 kernel: audit: type=1325 audit(1747356221.569:289): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.575487 kernel: audit: type=1300 audit(1747356221.569:289): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8f56b20 a2=0 a3=1 items=0 ppid=2257 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.575517 kernel: audit: type=1327 audit(1747356221.569:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.590000 audit[2582]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.593977 kernel: audit: type=1325 audit(1747356221.590:290): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.590000 audit[2582]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=6736 a0=3 a1=ffffeb223eb0 a2=0 a3=1 items=0 ppid=2257 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.599395 kernel: audit: type=1300 audit(1747356221.590:290): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffeb223eb0 a2=0 a3=1 items=0 ppid=2257 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.599442 kernel: audit: type=1327 audit(1747356221.590:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.600000 audit[2582]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.600000 audit[2582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb223eb0 a2=0 a3=1 items=0 ppid=2257 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:41.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:41.602992 kernel: audit: type=1325 audit(1747356221.600:291): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:41.918588 kubelet[2112]: I0516 00:43:41.918546 2112 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2-typha-certs\") pod \"calico-typha-54d9df8d8b-mcqzf\" (UID: \"e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2\") " pod="calico-system/calico-typha-54d9df8d8b-mcqzf" May 16 00:43:41.919108 kubelet[2112]: I0516 00:43:41.919082 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2-tigera-ca-bundle\") pod \"calico-typha-54d9df8d8b-mcqzf\" (UID: \"e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2\") " pod="calico-system/calico-typha-54d9df8d8b-mcqzf" May 16 00:43:41.919206 kubelet[2112]: I0516 00:43:41.919191 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tncns\" (UniqueName: \"kubernetes.io/projected/e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2-kube-api-access-tncns\") pod \"calico-typha-54d9df8d8b-mcqzf\" (UID: \"e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2\") " pod="calico-system/calico-typha-54d9df8d8b-mcqzf" May 16 00:43:42.019675 kubelet[2112]: I0516 00:43:42.019635 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-var-lib-calico\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.019829 kubelet[2112]: I0516 00:43:42.019671 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-cni-net-dir\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.019829 kubelet[2112]: I0516 00:43:42.019715 2112 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-lib-modules\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.019829 kubelet[2112]: I0516 00:43:42.019731 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-flexvol-driver-host\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.019829 kubelet[2112]: I0516 00:43:42.019746 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4b66489-09d1-4b14-b39d-3d94a1253ee6-node-certs\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.019829 kubelet[2112]: I0516 00:43:42.019761 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b66489-09d1-4b14-b39d-3d94a1253ee6-tigera-ca-bundle\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.020067 kubelet[2112]: I0516 00:43:42.019779 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-var-run-calico\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.020067 kubelet[2112]: I0516 00:43:42.019808 2112 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-cni-log-dir\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.020067 kubelet[2112]: I0516 00:43:42.019837 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dgj\" (UniqueName: \"kubernetes.io/projected/c4b66489-09d1-4b14-b39d-3d94a1253ee6-kube-api-access-n4dgj\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.020067 kubelet[2112]: I0516 00:43:42.019857 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-cni-bin-dir\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.020067 kubelet[2112]: I0516 00:43:42.019874 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-policysync\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.020186 kubelet[2112]: I0516 00:43:42.019889 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4b66489-09d1-4b14-b39d-3d94a1253ee6-xtables-lock\") pod \"calico-node-n94jj\" (UID: \"c4b66489-09d1-4b14-b39d-3d94a1253ee6\") " pod="calico-system/calico-node-n94jj" May 16 00:43:42.067886 kubelet[2112]: E0516 00:43:42.067847 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:42.068389 env[1326]: time="2025-05-16T00:43:42.068347028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d9df8d8b-mcqzf,Uid:e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2,Namespace:calico-system,Attempt:0,}" May 16 00:43:42.090323 env[1326]: time="2025-05-16T00:43:42.090247257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:42.090323 env[1326]: time="2025-05-16T00:43:42.090294816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:42.090502 env[1326]: time="2025-05-16T00:43:42.090306136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:42.090502 env[1326]: time="2025-05-16T00:43:42.090459133Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd0da2123a91be5180cde2dd75c21022792c3958e8cb4182cb96485259fe431a pid=2593 runtime=io.containerd.runc.v2 May 16 00:43:42.122643 kubelet[2112]: E0516 00:43:42.121063 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.122643 kubelet[2112]: W0516 00:43:42.121097 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.122643 kubelet[2112]: E0516 00:43:42.121122 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.122643 kubelet[2112]: E0516 00:43:42.121294 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.122643 kubelet[2112]: W0516 00:43:42.121304 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.122643 kubelet[2112]: E0516 00:43:42.121332 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.124080 kubelet[2112]: E0516 00:43:42.124061 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.124176 kubelet[2112]: W0516 00:43:42.124161 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.124241 kubelet[2112]: E0516 00:43:42.124229 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.124548 kubelet[2112]: E0516 00:43:42.124534 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.124634 kubelet[2112]: W0516 00:43:42.124620 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.124749 kubelet[2112]: E0516 00:43:42.124723 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.125010 kubelet[2112]: E0516 00:43:42.124997 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.125106 kubelet[2112]: W0516 00:43:42.125083 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.125196 kubelet[2112]: E0516 00:43:42.125174 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.125432 kubelet[2112]: E0516 00:43:42.125418 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.125524 kubelet[2112]: W0516 00:43:42.125510 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.125631 kubelet[2112]: E0516 00:43:42.125608 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.125838 kubelet[2112]: E0516 00:43:42.125825 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.125914 kubelet[2112]: W0516 00:43:42.125900 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.126062 kubelet[2112]: E0516 00:43:42.126039 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.190688 env[1326]: time="2025-05-16T00:43:42.187538737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d9df8d8b-mcqzf,Uid:e93f6e65-2d95-4b9a-ac3c-d91ed2376cb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd0da2123a91be5180cde2dd75c21022792c3958e8cb4182cb96485259fe431a\"" May 16 00:43:42.193212 kubelet[2112]: E0516 00:43:42.193188 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:42.195411 env[1326]: time="2025-05-16T00:43:42.195378975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 16 00:43:42.207772 kubelet[2112]: E0516 00:43:42.207713 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2kkd" podUID="00bfb5c0-df56-4053-a2df-e7346d66a58a" May 16 00:43:42.221201 kubelet[2112]: E0516 00:43:42.220805 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.221201 kubelet[2112]: W0516 00:43:42.220828 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.221201 kubelet[2112]: E0516 00:43:42.220845 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
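The dns.go warning in the middle of this burst is unrelated to FlexVolume: the node's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) when building pod DNS config. A quick way to check a node for this condition (a sketch; the example file and its fourth entry are made up):

```shell
#!/bin/sh
# Count "nameserver" entries in a resolv.conf. glibc's resolver honors at most
# three (MAXNS), and the kubelet emits the "Nameserver limits exceeded" event
# seen above when the node file has more, silently dropping the extras.
count_nameservers() {
  grep -c '^nameserver' "$1"
}

# Hypothetical resolv.conf with one entry over the limit:
cat > /tmp/resolv.conf.example <<'EOF'
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
EOF

count_nameservers /tmp/resolv.conf.example   # prints 4: one more than the limit of 3
```

Trimming the node's resolv.conf to three entries (or pointing the kubelet at a trimmed file via its resolv-conf setting) makes the warning go away.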
Error: unexpected end of JSON input" May 16 00:43:42.224012 kubelet[2112]: E0516 00:43:42.224002 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.224012 kubelet[2112]: W0516 00:43:42.224012 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.224072 kubelet[2112]: E0516 00:43:42.224019 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.309373 env[1326]: time="2025-05-16T00:43:42.309319632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n94jj,Uid:c4b66489-09d1-4b14-b39d-3d94a1253ee6,Namespace:calico-system,Attempt:0,}" May 16 00:43:42.321474 kubelet[2112]: E0516 00:43:42.321299 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.321474 kubelet[2112]: W0516 00:43:42.321321 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.321474 kubelet[2112]: E0516 00:43:42.321341 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.321474 kubelet[2112]: I0516 00:43:42.321369 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00bfb5c0-df56-4053-a2df-e7346d66a58a-socket-dir\") pod \"csi-node-driver-d2kkd\" (UID: \"00bfb5c0-df56-4053-a2df-e7346d66a58a\") " pod="calico-system/csi-node-driver-d2kkd" May 16 00:43:42.321925 kubelet[2112]: I0516 00:43:42.321802 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z88x\" (UniqueName: \"kubernetes.io/projected/00bfb5c0-df56-4053-a2df-e7346d66a58a-kube-api-access-2z88x\") pod \"csi-node-driver-d2kkd\" (UID: \"00bfb5c0-df56-4053-a2df-e7346d66a58a\") " pod="calico-system/csi-node-driver-d2kkd" May 16 00:43:42.322277 kubelet[2112]: I0516 00:43:42.322172 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00bfb5c0-df56-4053-a2df-e7346d66a58a-registration-dir\") pod \"csi-node-driver-d2kkd\" (UID: \"00bfb5c0-df56-4053-a2df-e7346d66a58a\") " pod="calico-system/csi-node-driver-d2kkd" May 16 00:43:42.322608 kubelet[2112]: I0516 00:43:42.322512 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/00bfb5c0-df56-4053-a2df-e7346d66a58a-varrun\") pod \"csi-node-driver-d2kkd\" (UID: \"00bfb5c0-df56-4053-a2df-e7346d66a58a\") " pod="calico-system/csi-node-driver-d2kkd" May 16 00:43:42.322900 kubelet[2112]: I0516 00:43:42.322892 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00bfb5c0-df56-4053-a2df-e7346d66a58a-kubelet-dir\") pod \"csi-node-driver-d2kkd\" (UID: \"00bfb5c0-df56-4053-a2df-e7346d66a58a\") " pod="calico-system/csi-node-driver-d2kkd" May 16 00:43:42.324034 kubelet[2112]: E0516 00:43:42.323898 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.324034 kubelet[2112]: W0516 00:43:42.323909 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.324138 kubelet[2112]: E0516 00:43:42.324026 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.324334 kubelet[2112]: E0516 00:43:42.324211 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.324334 kubelet[2112]: W0516 00:43:42.324223 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.324334 kubelet[2112]: E0516 00:43:42.324245 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.324709 env[1326]: time="2025-05-16T00:43:42.324437481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:42.324709 env[1326]: time="2025-05-16T00:43:42.324479041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:42.324709 env[1326]: time="2025-05-16T00:43:42.324495880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:42.324709 env[1326]: time="2025-05-16T00:43:42.324681516Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad pid=2693 runtime=io.containerd.runc.v2 May 16 00:43:42.324884 kubelet[2112]: E0516 00:43:42.324501 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.324884 kubelet[2112]: W0516 00:43:42.324516 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.324884 kubelet[2112]: E0516 00:43:42.324527 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.325171 kubelet[2112]: E0516 00:43:42.325050 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.325171 kubelet[2112]: W0516 00:43:42.325063 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.325171 kubelet[2112]: E0516 00:43:42.325075 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.325456 kubelet[2112]: E0516 00:43:42.325334 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.325456 kubelet[2112]: W0516 00:43:42.325344 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.325456 kubelet[2112]: E0516 00:43:42.325354 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.325749 kubelet[2112]: E0516 00:43:42.325626 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.325749 kubelet[2112]: W0516 00:43:42.325637 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.325749 kubelet[2112]: E0516 00:43:42.325648 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:42.326034 kubelet[2112]: E0516 00:43:42.325914 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.326034 kubelet[2112]: W0516 00:43:42.325924 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.326034 kubelet[2112]: E0516 00:43:42.325934 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:42.396471 env[1326]: time="2025-05-16T00:43:42.394946671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n94jj,Uid:c4b66489-09d1-4b14-b39d-3d94a1253ee6,Namespace:calico-system,Attempt:0,} returns sandbox id \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\"" May 16 00:43:42.424039 kubelet[2112]: E0516 00:43:42.424005 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:42.424039 kubelet[2112]: W0516 00:43:42.424030 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:42.424039 kubelet[2112]: E0516 00:43:42.424049 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 16 00:43:42.438690 kubelet[2112]: E0516 00:43:42.438666 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 00:43:42.438813 kubelet[2112]: W0516 00:43:42.438797 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 00:43:42.438893 kubelet[2112]: E0516 00:43:42.438880 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 00:43:42.618000 audit[2773]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 16 00:43:42.618000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe7c6b0f0 a2=0 a3=1 items=0 ppid=2257 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:42.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 16 00:43:42.629000 audit[2773]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 16 00:43:42.629000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7c6b0f0 a2=0 a3=1 items=0 ppid=2257 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:42.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 16 00:43:43.109143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923625880.mount: Deactivated successfully.
May 16 00:43:43.768369 kubelet[2112]: E0516 00:43:43.768020 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2kkd" podUID="00bfb5c0-df56-4053-a2df-e7346d66a58a"
May 16 00:43:43.808403 env[1326]: time="2025-05-16T00:43:43.808346628Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:43.809701 env[1326]: time="2025-05-16T00:43:43.809668002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:43.811033 env[1326]: time="2025-05-16T00:43:43.810999856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:43.812452 env[1326]: time="2025-05-16T00:43:43.812414108Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:43.812912 env[1326]: time="2025-05-16T00:43:43.812886539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\""
May 16 00:43:43.815166 env[1326]: time="2025-05-16T00:43:43.814867500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 16 00:43:43.835481 env[1326]: time="2025-05-16T00:43:43.835436136Z" level=info msg="CreateContainer within sandbox \"bd0da2123a91be5180cde2dd75c21022792c3958e8cb4182cb96485259fe431a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 16 00:43:43.844615 env[1326]: time="2025-05-16T00:43:43.844576956Z" level=info msg="CreateContainer within sandbox \"bd0da2123a91be5180cde2dd75c21022792c3958e8cb4182cb96485259fe431a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"88f12523fda6c3fc4b4ceab2587b2392d70c9b61890c1a0eb48c953349260df8\""
May 16 00:43:43.845311 env[1326]: time="2025-05-16T00:43:43.845276582Z" level=info msg="StartContainer for \"88f12523fda6c3fc4b4ceab2587b2392d70c9b61890c1a0eb48c953349260df8\""
May 16 00:43:43.934103 env[1326]: time="2025-05-16T00:43:43.934053678Z" level=info msg="StartContainer for \"88f12523fda6c3fc4b4ceab2587b2392d70c9b61890c1a0eb48c953349260df8\" returns successfully"
May 16 00:43:44.835822 env[1326]: time="2025-05-16T00:43:44.835773882Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:44.837399 env[1326]: time="2025-05-16T00:43:44.837358613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:44.838826 env[1326]: time="2025-05-16T00:43:44.838796586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:44.840430 env[1326]: time="2025-05-16T00:43:44.840395116Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:43:44.841061 env[1326]: time="2025-05-16T00:43:44.841035543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\""
May 16 00:43:44.844150 env[1326]: time="2025-05-16T00:43:44.844099646Z" level=info msg="CreateContainer within sandbox \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 16 00:43:44.857508 env[1326]: time="2025-05-16T00:43:44.857458635Z" level=info msg="CreateContainer within sandbox \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1e326fea904043cd427397fc18c6984c3ce196cc37254a8daf802dc0a37839fe\""
May 16 00:43:44.858267 env[1326]: time="2025-05-16T00:43:44.858237260Z" level=info msg="StartContainer for \"1e326fea904043cd427397fc18c6984c3ce196cc37254a8daf802dc0a37839fe\""
May 16 00:43:44.884008 kubelet[2112]: E0516 00:43:44.882128 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:43:44.944620 kubelet[2112]: E0516 00:43:44.944591 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 00:43:44.944907 kubelet[2112]: W0516 00:43:44.944783 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 00:43:44.944907 kubelet[2112]: E0516 00:43:44.944812 2112 plugins.go:691] 
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.945197 kubelet[2112]: E0516 00:43:44.945182 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.945270 kubelet[2112]: W0516 00:43:44.945258 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.945337 kubelet[2112]: E0516 00:43:44.945326 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.946058 kubelet[2112]: E0516 00:43:44.946042 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.946154 kubelet[2112]: W0516 00:43:44.946141 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.946212 kubelet[2112]: E0516 00:43:44.946201 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.946554 kubelet[2112]: E0516 00:43:44.946426 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.946554 kubelet[2112]: W0516 00:43:44.946448 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.946554 kubelet[2112]: E0516 00:43:44.946460 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.946817 kubelet[2112]: E0516 00:43:44.946727 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.946817 kubelet[2112]: W0516 00:43:44.946737 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.946817 kubelet[2112]: E0516 00:43:44.946747 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.947069 kubelet[2112]: E0516 00:43:44.946969 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.947069 kubelet[2112]: W0516 00:43:44.946980 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.947069 kubelet[2112]: E0516 00:43:44.946990 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.947328 kubelet[2112]: E0516 00:43:44.947225 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.947328 kubelet[2112]: W0516 00:43:44.947235 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.947328 kubelet[2112]: E0516 00:43:44.947245 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.947610 kubelet[2112]: E0516 00:43:44.947488 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.947610 kubelet[2112]: W0516 00:43:44.947498 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.947610 kubelet[2112]: E0516 00:43:44.947509 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.947866 kubelet[2112]: E0516 00:43:44.947773 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.947866 kubelet[2112]: W0516 00:43:44.947785 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.947866 kubelet[2112]: E0516 00:43:44.947794 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.948028 kubelet[2112]: E0516 00:43:44.948017 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.948087 kubelet[2112]: W0516 00:43:44.948076 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.948150 kubelet[2112]: E0516 00:43:44.948138 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.948423 kubelet[2112]: E0516 00:43:44.948410 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.948522 kubelet[2112]: W0516 00:43:44.948509 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.948584 kubelet[2112]: E0516 00:43:44.948572 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.948862 kubelet[2112]: E0516 00:43:44.948849 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.948938 kubelet[2112]: W0516 00:43:44.948926 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.949024 kubelet[2112]: E0516 00:43:44.949011 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.949318 kubelet[2112]: E0516 00:43:44.949303 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.949411 kubelet[2112]: W0516 00:43:44.949398 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.949475 kubelet[2112]: E0516 00:43:44.949464 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.949754 kubelet[2112]: E0516 00:43:44.949738 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.949841 kubelet[2112]: W0516 00:43:44.949828 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.949909 kubelet[2112]: E0516 00:43:44.949897 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.950202 kubelet[2112]: E0516 00:43:44.950190 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.950349 kubelet[2112]: W0516 00:43:44.950334 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.950412 kubelet[2112]: E0516 00:43:44.950401 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.950740 kubelet[2112]: E0516 00:43:44.950727 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.950823 kubelet[2112]: W0516 00:43:44.950811 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.950888 kubelet[2112]: E0516 00:43:44.950877 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.951237 kubelet[2112]: E0516 00:43:44.951218 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.951237 kubelet[2112]: W0516 00:43:44.951237 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.951386 kubelet[2112]: E0516 00:43:44.951259 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.951479 kubelet[2112]: E0516 00:43:44.951468 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.951523 kubelet[2112]: W0516 00:43:44.951479 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.951523 kubelet[2112]: E0516 00:43:44.951492 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.951675 kubelet[2112]: E0516 00:43:44.951665 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.951716 kubelet[2112]: W0516 00:43:44.951676 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.951716 kubelet[2112]: E0516 00:43:44.951692 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.951898 kubelet[2112]: E0516 00:43:44.951887 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.951941 kubelet[2112]: W0516 00:43:44.951898 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.951941 kubelet[2112]: E0516 00:43:44.951912 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.952299 kubelet[2112]: E0516 00:43:44.952285 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.952426 kubelet[2112]: W0516 00:43:44.952412 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.952514 kubelet[2112]: E0516 00:43:44.952501 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.952566 env[1326]: time="2025-05-16T00:43:44.952520169Z" level=info msg="StartContainer for \"1e326fea904043cd427397fc18c6984c3ce196cc37254a8daf802dc0a37839fe\" returns successfully" May 16 00:43:44.952755 kubelet[2112]: E0516 00:43:44.952735 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.952755 kubelet[2112]: W0516 00:43:44.952752 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.952837 kubelet[2112]: E0516 00:43:44.952765 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.953206 kubelet[2112]: E0516 00:43:44.953191 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.953206 kubelet[2112]: W0516 00:43:44.953205 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.953297 kubelet[2112]: E0516 00:43:44.953219 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.953478 kubelet[2112]: E0516 00:43:44.953462 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.953478 kubelet[2112]: W0516 00:43:44.953476 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.953549 kubelet[2112]: E0516 00:43:44.953489 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.953723 kubelet[2112]: E0516 00:43:44.953709 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.953771 kubelet[2112]: W0516 00:43:44.953725 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.953771 kubelet[2112]: E0516 00:43:44.953739 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.953898 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955025 kubelet[2112]: W0516 00:43:44.953911 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.953993 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.954111 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955025 kubelet[2112]: W0516 00:43:44.954118 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.954126 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.954286 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955025 kubelet[2112]: W0516 00:43:44.954294 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.954303 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.955025 kubelet[2112]: E0516 00:43:44.954464 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955312 kubelet[2112]: W0516 00:43:44.954472 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.955312 kubelet[2112]: E0516 00:43:44.954482 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.955312 kubelet[2112]: E0516 00:43:44.954720 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955312 kubelet[2112]: W0516 00:43:44.954729 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.955312 kubelet[2112]: E0516 00:43:44.954738 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.955513 kubelet[2112]: E0516 00:43:44.955499 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955577 kubelet[2112]: W0516 00:43:44.955565 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.955746 kubelet[2112]: E0516 00:43:44.955731 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:44.955878 kubelet[2112]: E0516 00:43:44.955867 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.955957 kubelet[2112]: W0516 00:43:44.955945 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.956043 kubelet[2112]: E0516 00:43:44.956031 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:43:44.956587 kubelet[2112]: E0516 00:43:44.956570 2112 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:43:44.956680 kubelet[2112]: W0516 00:43:44.956666 2112 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:43:44.956749 kubelet[2112]: E0516 00:43:44.956737 2112 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:43:45.008579 env[1326]: time="2025-05-16T00:43:45.008530563Z" level=info msg="shim disconnected" id=1e326fea904043cd427397fc18c6984c3ce196cc37254a8daf802dc0a37839fe May 16 00:43:45.008803 env[1326]: time="2025-05-16T00:43:45.008784639Z" level=warning msg="cleaning up after shim disconnected" id=1e326fea904043cd427397fc18c6984c3ce196cc37254a8daf802dc0a37839fe namespace=k8s.io May 16 00:43:45.008862 env[1326]: time="2025-05-16T00:43:45.008850238Z" level=info msg="cleaning up dead shim" May 16 00:43:45.015339 env[1326]: time="2025-05-16T00:43:45.015288722Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:43:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2900 runtime=io.containerd.runc.v2\n" May 16 00:43:45.025878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e326fea904043cd427397fc18c6984c3ce196cc37254a8daf802dc0a37839fe-rootfs.mount: Deactivated successfully. May 16 00:43:45.768334 kubelet[2112]: E0516 00:43:45.768284 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2kkd" podUID="00bfb5c0-df56-4053-a2df-e7346d66a58a" May 16 00:43:45.884198 kubelet[2112]: I0516 00:43:45.884164 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:43:45.884853 kubelet[2112]: E0516 00:43:45.884833 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:45.886194 env[1326]: time="2025-05-16T00:43:45.886161706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 00:43:45.905156 kubelet[2112]: I0516 00:43:45.905095 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-54d9df8d8b-mcqzf" podStartSLOduration=3.285486125 podStartE2EDuration="4.905079966s" podCreationTimestamp="2025-05-16 00:43:41 +0000 UTC" firstStartedPulling="2025-05-16 00:43:42.194982344 +0000 UTC m=+22.503604539" lastFinishedPulling="2025-05-16 00:43:43.814576185 +0000 UTC m=+24.123198380" observedRunningTime="2025-05-16 00:43:44.920359413 +0000 UTC m=+25.228981608" watchObservedRunningTime="2025-05-16 00:43:45.905079966 +0000 UTC m=+26.213702121" May 16 00:43:47.767675 kubelet[2112]: E0516 00:43:47.767623 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2kkd" podUID="00bfb5c0-df56-4053-a2df-e7346d66a58a" May 16 00:43:48.301175 kubelet[2112]: I0516 00:43:48.301135 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:43:48.301996 kubelet[2112]: E0516 00:43:48.301645 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:48.361000 audit[2923]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:48.363404 kernel: kauditd_printk_skb: 8 callbacks suppressed May 16 00:43:48.363458 kernel: audit: type=1325 audit(1747356228.361:294): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:48.361000 audit[2923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffd394960 a2=0 a3=1 items=0 ppid=2257 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:48.367343 kernel: audit: type=1300 audit(1747356228.361:294): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffd394960 a2=0 a3=1 items=0 ppid=2257 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:48.367406 kernel: audit: type=1327 audit(1747356228.361:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:48.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:48.372000 audit[2923]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:48.372000 audit[2923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffd394960 a2=0 a3=1 items=0 ppid=2257 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:48.377468 kernel: audit: type=1325 audit(1747356228.372:295): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:48.377524 kernel: audit: type=1300 audit(1747356228.372:295): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffd394960 a2=0 a3=1 items=0 ppid=2257 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:48.377543 kernel: audit: type=1327 audit(1747356228.372:295): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:48.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:48.589864 env[1326]: time="2025-05-16T00:43:48.589751855Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:48.591172 env[1326]: time="2025-05-16T00:43:48.591139353Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:48.592596 env[1326]: time="2025-05-16T00:43:48.592559050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:48.594105 env[1326]: time="2025-05-16T00:43:48.594074626Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:48.594590 env[1326]: time="2025-05-16T00:43:48.594557979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 16 00:43:48.596724 env[1326]: time="2025-05-16T00:43:48.596677825Z" level=info msg="CreateContainer within sandbox \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 00:43:48.609533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443331079.mount: Deactivated successfully. 
May 16 00:43:48.612704 env[1326]: time="2025-05-16T00:43:48.612661132Z" level=info msg="CreateContainer within sandbox \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"99efc7a3c0cadb39700f3116feb35369310ba10655543c3115f9afa95f26a465\"" May 16 00:43:48.614435 env[1326]: time="2025-05-16T00:43:48.614399104Z" level=info msg="StartContainer for \"99efc7a3c0cadb39700f3116feb35369310ba10655543c3115f9afa95f26a465\"" May 16 00:43:48.689738 env[1326]: time="2025-05-16T00:43:48.689691751Z" level=info msg="StartContainer for \"99efc7a3c0cadb39700f3116feb35369310ba10655543c3115f9afa95f26a465\" returns successfully" May 16 00:43:48.891500 kubelet[2112]: E0516 00:43:48.891164 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:49.332131 env[1326]: time="2025-05-16T00:43:49.332022619Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:43:49.348498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99efc7a3c0cadb39700f3116feb35369310ba10655543c3115f9afa95f26a465-rootfs.mount: Deactivated successfully. 
May 16 00:43:49.351904 kubelet[2112]: I0516 00:43:49.351394 2112 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 16 00:43:49.354634 env[1326]: time="2025-05-16T00:43:49.354583036Z" level=info msg="shim disconnected" id=99efc7a3c0cadb39700f3116feb35369310ba10655543c3115f9afa95f26a465
May 16 00:43:49.354634 env[1326]: time="2025-05-16T00:43:49.354633755Z" level=warning msg="cleaning up after shim disconnected" id=99efc7a3c0cadb39700f3116feb35369310ba10655543c3115f9afa95f26a465 namespace=k8s.io
May 16 00:43:49.354820 env[1326]: time="2025-05-16T00:43:49.354643955Z" level=info msg="cleaning up dead shim"
May 16 00:43:49.365358 env[1326]: time="2025-05-16T00:43:49.365306873Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:43:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2973 runtime=io.containerd.runc.v2\n"
May 16 00:43:49.387763 kubelet[2112]: I0516 00:43:49.387730 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpng\" (UniqueName: \"kubernetes.io/projected/78479183-5b0e-4e14-9b65-379d830097f9-kube-api-access-rlpng\") pod \"calico-apiserver-7d5c695cc5-gsjpm\" (UID: \"78479183-5b0e-4e14-9b65-379d830097f9\") " pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm"
May 16 00:43:49.388030 kubelet[2112]: I0516 00:43:49.388014 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78479183-5b0e-4e14-9b65-379d830097f9-calico-apiserver-certs\") pod \"calico-apiserver-7d5c695cc5-gsjpm\" (UID: \"78479183-5b0e-4e14-9b65-379d830097f9\") " pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm"
May 16 00:43:49.489430 kubelet[2112]: I0516 00:43:49.489357 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-kxr6q\" (UID: \"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82\") " pod="calico-system/goldmane-8f77d7b6c-kxr6q"
May 16 00:43:49.489430 kubelet[2112]: I0516 00:43:49.489426 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2cs\" (UniqueName: \"kubernetes.io/projected/cabb5372-9b11-40f2-abd0-e05e56644a15-kube-api-access-mt2cs\") pod \"whisker-56577bbdf5-bx2tp\" (UID: \"cabb5372-9b11-40f2-abd0-e05e56644a15\") " pod="calico-system/whisker-56577bbdf5-bx2tp"
May 16 00:43:49.489622 kubelet[2112]: I0516 00:43:49.489448 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-ca-bundle\") pod \"whisker-56577bbdf5-bx2tp\" (UID: \"cabb5372-9b11-40f2-abd0-e05e56644a15\") " pod="calico-system/whisker-56577bbdf5-bx2tp"
May 16 00:43:49.489622 kubelet[2112]: I0516 00:43:49.489517 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvrv\" (UniqueName: \"kubernetes.io/projected/65a002c7-112a-4b0f-8977-55ccbd8ecc6b-kube-api-access-5gvrv\") pod \"coredns-7c65d6cfc9-f65v2\" (UID: \"65a002c7-112a-4b0f-8977-55ccbd8ecc6b\") " pod="kube-system/coredns-7c65d6cfc9-f65v2"
May 16 00:43:49.489622 kubelet[2112]: I0516 00:43:49.489577 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe218a81-50db-479d-bb87-757c8c52f897-config-volume\") pod \"coredns-7c65d6cfc9-nlbz8\" (UID: \"fe218a81-50db-479d-bb87-757c8c52f897\") " pod="kube-system/coredns-7c65d6cfc9-nlbz8"
May 16 00:43:49.489622 kubelet[2112]: I0516 00:43:49.489616 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a002c7-112a-4b0f-8977-55ccbd8ecc6b-config-volume\") pod \"coredns-7c65d6cfc9-f65v2\" (UID: \"65a002c7-112a-4b0f-8977-55ccbd8ecc6b\") " pod="kube-system/coredns-7c65d6cfc9-f65v2"
May 16 00:43:49.489722 kubelet[2112]: I0516 00:43:49.489648 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7jz\" (UniqueName: \"kubernetes.io/projected/74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82-kube-api-access-gz7jz\") pod \"goldmane-8f77d7b6c-kxr6q\" (UID: \"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82\") " pod="calico-system/goldmane-8f77d7b6c-kxr6q"
May 16 00:43:49.489722 kubelet[2112]: I0516 00:43:49.489664 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qcg\" (UniqueName: \"kubernetes.io/projected/fe218a81-50db-479d-bb87-757c8c52f897-kube-api-access-r4qcg\") pod \"coredns-7c65d6cfc9-nlbz8\" (UID: \"fe218a81-50db-479d-bb87-757c8c52f897\") " pod="kube-system/coredns-7c65d6cfc9-nlbz8"
May 16 00:43:49.489722 kubelet[2112]: I0516 00:43:49.489681 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b53c1672-73dc-401f-b6e2-787097ef7c61-tigera-ca-bundle\") pod \"calico-kube-controllers-5648d477c5-hhnfn\" (UID: \"b53c1672-73dc-401f-b6e2-787097ef7c61\") " pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn"
May 16 00:43:49.489843 kubelet[2112]: I0516 00:43:49.489795 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6czk\" (UniqueName: \"kubernetes.io/projected/b53c1672-73dc-401f-b6e2-787097ef7c61-kube-api-access-q6czk\") pod \"calico-kube-controllers-5648d477c5-hhnfn\" (UID: \"b53c1672-73dc-401f-b6e2-787097ef7c61\") " pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn"
May 16 00:43:49.489886 kubelet[2112]: I0516 00:43:49.489856 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-backend-key-pair\") pod \"whisker-56577bbdf5-bx2tp\" (UID: \"cabb5372-9b11-40f2-abd0-e05e56644a15\") " pod="calico-system/whisker-56577bbdf5-bx2tp"
May 16 00:43:49.489886 kubelet[2112]: I0516 00:43:49.489874 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-kxr6q\" (UID: \"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82\") " pod="calico-system/goldmane-8f77d7b6c-kxr6q"
May 16 00:43:49.489937 kubelet[2112]: I0516 00:43:49.489891 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fxm\" (UniqueName: \"kubernetes.io/projected/0d68fd98-e1d9-442d-9586-2d60cebfa71e-kube-api-access-j5fxm\") pod \"calico-apiserver-7d5c695cc5-mf7nf\" (UID: \"0d68fd98-e1d9-442d-9586-2d60cebfa71e\") " pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf"
May 16 00:43:49.489937 kubelet[2112]: I0516 00:43:49.489920 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82-config\") pod \"goldmane-8f77d7b6c-kxr6q\" (UID: \"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82\") " pod="calico-system/goldmane-8f77d7b6c-kxr6q"
May 16 00:43:49.490073 kubelet[2112]: I0516 00:43:49.490002 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d68fd98-e1d9-442d-9586-2d60cebfa71e-calico-apiserver-certs\") pod \"calico-apiserver-7d5c695cc5-mf7nf\" (UID: \"0d68fd98-e1d9-442d-9586-2d60cebfa71e\") " pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf"
May 16 00:43:49.686560 env[1326]: time="2025-05-16T00:43:49.686232187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-gsjpm,Uid:78479183-5b0e-4e14-9b65-379d830097f9,Namespace:calico-apiserver,Attempt:0,}"
May 16 00:43:49.691042 env[1326]: time="2025-05-16T00:43:49.691005155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5648d477c5-hhnfn,Uid:b53c1672-73dc-401f-b6e2-787097ef7c61,Namespace:calico-system,Attempt:0,}"
May 16 00:43:49.695683 env[1326]: time="2025-05-16T00:43:49.695649364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56577bbdf5-bx2tp,Uid:cabb5372-9b11-40f2-abd0-e05e56644a15,Namespace:calico-system,Attempt:0,}"
May 16 00:43:49.697505 kubelet[2112]: E0516 00:43:49.697477 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:43:49.699144 env[1326]: time="2025-05-16T00:43:49.698000968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlbz8,Uid:fe218a81-50db-479d-bb87-757c8c52f897,Namespace:kube-system,Attempt:0,}"
May 16 00:43:49.704610 env[1326]: time="2025-05-16T00:43:49.704577388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-kxr6q,Uid:74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82,Namespace:calico-system,Attempt:0,}"
May 16 00:43:49.704848 env[1326]: time="2025-05-16T00:43:49.704821945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-mf7nf,Uid:0d68fd98-e1d9-442d-9586-2d60cebfa71e,Namespace:calico-apiserver,Attempt:0,}"
May 16 00:43:49.713350 kubelet[2112]: E0516 00:43:49.710642 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:43:49.715041 env[1326]: time="2025-05-16T00:43:49.711132808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f65v2,Uid:65a002c7-112a-4b0f-8977-55ccbd8ecc6b,Namespace:kube-system,Attempt:0,}"
May 16 00:43:49.772716 env[1326]: time="2025-05-16T00:43:49.771692967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2kkd,Uid:00bfb5c0-df56-4053-a2df-e7346d66a58a,Namespace:calico-system,Attempt:0,}"
May 16 00:43:49.906211 env[1326]: time="2025-05-16T00:43:49.906095961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 16 00:43:49.973997 env[1326]: time="2025-05-16T00:43:49.973850889Z" level=error msg="Failed to destroy network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.974260 env[1326]: time="2025-05-16T00:43:49.974228843Z" level=error msg="encountered an error cleaning up failed sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.974311 env[1326]: time="2025-05-16T00:43:49.974277083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-gsjpm,Uid:78479183-5b0e-4e14-9b65-379d830097f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.974767 kubelet[2112]: E0516 00:43:49.974716 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.975419 kubelet[2112]: E0516 00:43:49.975381 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm"
May 16 00:43:49.976073 kubelet[2112]: E0516 00:43:49.976043 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm"
May 16 00:43:49.976154 kubelet[2112]: E0516 00:43:49.976110 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d5c695cc5-gsjpm_calico-apiserver(78479183-5b0e-4e14-9b65-379d830097f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d5c695cc5-gsjpm_calico-apiserver(78479183-5b0e-4e14-9b65-379d830097f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm" podUID="78479183-5b0e-4e14-9b65-379d830097f9"
May 16 00:43:49.982349 env[1326]: time="2025-05-16T00:43:49.982291281Z" level=error msg="Failed to destroy network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.982689 env[1326]: time="2025-05-16T00:43:49.982656115Z" level=error msg="encountered an error cleaning up failed sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.982736 env[1326]: time="2025-05-16T00:43:49.982710794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5648d477c5-hhnfn,Uid:b53c1672-73dc-401f-b6e2-787097ef7c61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.982970 kubelet[2112]: E0516 00:43:49.982924 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.983024 kubelet[2112]: E0516 00:43:49.983004 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn"
May 16 00:43:49.983054 kubelet[2112]: E0516 00:43:49.983023 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn"
May 16 00:43:49.983101 kubelet[2112]: E0516 00:43:49.983072 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5648d477c5-hhnfn_calico-system(b53c1672-73dc-401f-b6e2-787097ef7c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5648d477c5-hhnfn_calico-system(b53c1672-73dc-401f-b6e2-787097ef7c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn" podUID="b53c1672-73dc-401f-b6e2-787097ef7c61"
May 16 00:43:49.991133 env[1326]: time="2025-05-16T00:43:49.991077667Z" level=error msg="Failed to destroy network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.991488 env[1326]: time="2025-05-16T00:43:49.991437262Z" level=error msg="encountered an error cleaning up failed sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.991553 env[1326]: time="2025-05-16T00:43:49.991493701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56577bbdf5-bx2tp,Uid:cabb5372-9b11-40f2-abd0-e05e56644a15,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.991739 kubelet[2112]: E0516 00:43:49.991704 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:49.991795 kubelet[2112]: E0516 00:43:49.991759 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56577bbdf5-bx2tp"
May 16 00:43:49.991795 kubelet[2112]: E0516 00:43:49.991781 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56577bbdf5-bx2tp"
May 16 00:43:49.991863 kubelet[2112]: E0516 00:43:49.991821 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56577bbdf5-bx2tp_calico-system(cabb5372-9b11-40f2-abd0-e05e56644a15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56577bbdf5-bx2tp_calico-system(cabb5372-9b11-40f2-abd0-e05e56644a15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56577bbdf5-bx2tp" podUID="cabb5372-9b11-40f2-abd0-e05e56644a15"
May 16 00:43:50.016249 env[1326]: time="2025-05-16T00:43:50.016182894Z" level=error msg="Failed to destroy network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.016592 env[1326]: time="2025-05-16T00:43:50.016563408Z" level=error msg="encountered an error cleaning up failed sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.016638 env[1326]: time="2025-05-16T00:43:50.016611607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-kxr6q,Uid:74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.016899 kubelet[2112]: E0516 00:43:50.016854 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.016958 kubelet[2112]: E0516 00:43:50.016909 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-kxr6q"
May 16 00:43:50.016958 kubelet[2112]: E0516 00:43:50.016945 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-kxr6q"
May 16 00:43:50.018495 kubelet[2112]: E0516 00:43:50.017002 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-kxr6q_calico-system(74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-kxr6q_calico-system(74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82"
May 16 00:43:50.018726 env[1326]: time="2025-05-16T00:43:50.018687257Z" level=error msg="Failed to destroy network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.019519 env[1326]: time="2025-05-16T00:43:50.019472606Z" level=error msg="encountered an error cleaning up failed sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.019664 env[1326]: time="2025-05-16T00:43:50.019634523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlbz8,Uid:fe218a81-50db-479d-bb87-757c8c52f897,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.019972 kubelet[2112]: E0516 00:43:50.019908 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.020047 kubelet[2112]: E0516 00:43:50.019989 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nlbz8"
May 16 00:43:50.020047 kubelet[2112]: E0516 00:43:50.020012 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nlbz8"
May 16 00:43:50.020107 kubelet[2112]: E0516 00:43:50.020079 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nlbz8_kube-system(fe218a81-50db-479d-bb87-757c8c52f897)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nlbz8_kube-system(fe218a81-50db-479d-bb87-757c8c52f897)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nlbz8" podUID="fe218a81-50db-479d-bb87-757c8c52f897"
May 16 00:43:50.027103 env[1326]: time="2025-05-16T00:43:50.027057975Z" level=error msg="Failed to destroy network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.027447 env[1326]: time="2025-05-16T00:43:50.027415969Z" level=error msg="encountered an error cleaning up failed sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.027500 env[1326]: time="2025-05-16T00:43:50.027473048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-mf7nf,Uid:0d68fd98-e1d9-442d-9586-2d60cebfa71e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.027694 kubelet[2112]: E0516 00:43:50.027661 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.027742 kubelet[2112]: E0516 00:43:50.027708 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf"
May 16 00:43:50.027742 kubelet[2112]: E0516 00:43:50.027734 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf"
May 16 00:43:50.027800 kubelet[2112]: E0516 00:43:50.027770 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d5c695cc5-mf7nf_calico-apiserver(0d68fd98-e1d9-442d-9586-2d60cebfa71e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d5c695cc5-mf7nf_calico-apiserver(0d68fd98-e1d9-442d-9586-2d60cebfa71e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf" podUID="0d68fd98-e1d9-442d-9586-2d60cebfa71e"
May 16 00:43:50.029230 env[1326]: time="2025-05-16T00:43:50.029190023Z" level=error msg="Failed to destroy network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.029617 env[1326]: time="2025-05-16T00:43:50.029582058Z" level=error msg="encountered an error cleaning up failed sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.029662 env[1326]: time="2025-05-16T00:43:50.029638097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2kkd,Uid:00bfb5c0-df56-4053-a2df-e7346d66a58a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.029826 kubelet[2112]: E0516 00:43:50.029788 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.029870 kubelet[2112]: E0516 00:43:50.029839 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2kkd"
May 16 00:43:50.029870 kubelet[2112]: E0516 00:43:50.029858 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2kkd"
May 16 00:43:50.029935 kubelet[2112]: E0516 00:43:50.029899 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d2kkd_calico-system(00bfb5c0-df56-4053-a2df-e7346d66a58a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d2kkd_calico-system(00bfb5c0-df56-4053-a2df-e7346d66a58a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d2kkd" podUID="00bfb5c0-df56-4053-a2df-e7346d66a58a"
May 16 00:43:50.041839 env[1326]: time="2025-05-16T00:43:50.041763239Z" level=error msg="Failed to destroy network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.042145 env[1326]: time="2025-05-16T00:43:50.042104994Z" level=error msg="encountered an error cleaning up failed sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.042185 env[1326]: time="2025-05-16T00:43:50.042152874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f65v2,Uid:65a002c7-112a-4b0f-8977-55ccbd8ecc6b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.042392 kubelet[2112]: E0516 00:43:50.042352 2112 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 00:43:50.042438 kubelet[2112]: E0516 00:43:50.042409 2112 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f65v2"
May 16 00:43:50.042466 kubelet[2112]: E0516 00:43:50.042438 2112 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f65v2"
May 16 00:43:50.042526 kubelet[2112]: E0516 00:43:50.042476 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f65v2_kube-system(65a002c7-112a-4b0f-8977-55ccbd8ecc6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f65v2_kube-system(65a002c7-112a-4b0f-8977-55ccbd8ecc6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f65v2" podUID="65a002c7-112a-4b0f-8977-55ccbd8ecc6b"
May 16 00:43:50.607146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124-shm.mount: Deactivated successfully.
May 16 00:43:50.908013 kubelet[2112]: I0516 00:43:50.907982 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:43:50.909069 env[1326]: time="2025-05-16T00:43:50.909029864Z" level=info msg="StopPodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\"" May 16 00:43:50.910898 kubelet[2112]: I0516 00:43:50.910865 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:43:50.911406 env[1326]: time="2025-05-16T00:43:50.911379470Z" level=info msg="StopPodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\"" May 16 00:43:50.914164 kubelet[2112]: I0516 00:43:50.914139 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:43:50.915136 env[1326]: time="2025-05-16T00:43:50.915099736Z" level=info msg="StopPodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\"" May 16 00:43:50.916332 kubelet[2112]: I0516 00:43:50.916299 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:43:50.916794 env[1326]: time="2025-05-16T00:43:50.916770631Z" level=info msg="StopPodSandbox for \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\"" May 16 00:43:50.918646 kubelet[2112]: I0516 00:43:50.918616 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:43:50.920215 env[1326]: time="2025-05-16T00:43:50.920180181Z" level=info msg="StopPodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\"" May 16 00:43:50.921395 kubelet[2112]: 
I0516 00:43:50.921043 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:43:50.921629 env[1326]: time="2025-05-16T00:43:50.921600800Z" level=info msg="StopPodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\"" May 16 00:43:50.922846 kubelet[2112]: I0516 00:43:50.922824 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:43:50.923446 env[1326]: time="2025-05-16T00:43:50.923419854Z" level=info msg="StopPodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\"" May 16 00:43:50.924761 kubelet[2112]: I0516 00:43:50.924729 2112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:43:50.925353 env[1326]: time="2025-05-16T00:43:50.925305586Z" level=info msg="StopPodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\"" May 16 00:43:50.953167 env[1326]: time="2025-05-16T00:43:50.953102179Z" level=error msg="StopPodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" failed" error="failed to destroy network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:50.953383 kubelet[2112]: E0516 00:43:50.953334 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:43:50.953699 kubelet[2112]: E0516 00:43:50.953408 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124"} May 16 00:43:50.953735 kubelet[2112]: E0516 00:43:50.953720 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78479183-5b0e-4e14-9b65-379d830097f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:50.953807 kubelet[2112]: E0516 00:43:50.953744 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78479183-5b0e-4e14-9b65-379d830097f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm" podUID="78479183-5b0e-4e14-9b65-379d830097f9" May 16 00:43:50.964934 env[1326]: time="2025-05-16T00:43:50.964865807Z" level=error msg="StopPodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" failed" error="failed to destroy network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:50.965145 kubelet[2112]: E0516 00:43:50.965100 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:43:50.965205 kubelet[2112]: E0516 00:43:50.965160 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5"} May 16 00:43:50.965205 kubelet[2112]: E0516 00:43:50.965203 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cabb5372-9b11-40f2-abd0-e05e56644a15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:50.965296 kubelet[2112]: E0516 00:43:50.965225 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cabb5372-9b11-40f2-abd0-e05e56644a15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56577bbdf5-bx2tp" 
podUID="cabb5372-9b11-40f2-abd0-e05e56644a15" May 16 00:43:50.969143 env[1326]: time="2025-05-16T00:43:50.969087945Z" level=error msg="StopPodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" failed" error="failed to destroy network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:50.969490 kubelet[2112]: E0516 00:43:50.969448 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:43:50.969564 kubelet[2112]: E0516 00:43:50.969546 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34"} May 16 00:43:50.969601 kubelet[2112]: E0516 00:43:50.969581 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00bfb5c0-df56-4053-a2df-e7346d66a58a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:50.969652 kubelet[2112]: E0516 00:43:50.969601 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"00bfb5c0-df56-4053-a2df-e7346d66a58a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d2kkd" podUID="00bfb5c0-df56-4053-a2df-e7346d66a58a" May 16 00:43:50.977632 env[1326]: time="2025-05-16T00:43:50.977581341Z" level=error msg="StopPodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" failed" error="failed to destroy network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:50.977831 kubelet[2112]: E0516 00:43:50.977785 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:43:50.978131 kubelet[2112]: E0516 00:43:50.977831 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618"} May 16 00:43:50.978131 kubelet[2112]: E0516 00:43:50.977859 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:50.978131 kubelet[2112]: E0516 00:43:50.977877 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82" May 16 00:43:50.988607 env[1326]: time="2025-05-16T00:43:50.988129187Z" level=error msg="StopPodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" failed" error="failed to destroy network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:50.988846 kubelet[2112]: E0516 00:43:50.988801 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:43:50.988915 kubelet[2112]: E0516 00:43:50.988852 
2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae"} May 16 00:43:50.988915 kubelet[2112]: E0516 00:43:50.988883 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b53c1672-73dc-401f-b6e2-787097ef7c61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:50.991779 kubelet[2112]: E0516 00:43:50.988902 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b53c1672-73dc-401f-b6e2-787097ef7c61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn" podUID="b53c1672-73dc-401f-b6e2-787097ef7c61" May 16 00:43:50.992144 env[1326]: time="2025-05-16T00:43:50.992103448Z" level=error msg="StopPodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" failed" error="failed to destroy network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:50.992456 kubelet[2112]: E0516 00:43:50.992381 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:43:50.992679 kubelet[2112]: E0516 00:43:50.992553 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5"} May 16 00:43:50.992798 kubelet[2112]: E0516 00:43:50.992779 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d68fd98-e1d9-442d-9586-2d60cebfa71e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:50.992871 kubelet[2112]: E0516 00:43:50.992805 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d68fd98-e1d9-442d-9586-2d60cebfa71e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf" podUID="0d68fd98-e1d9-442d-9586-2d60cebfa71e" May 16 00:43:51.009342 env[1326]: time="2025-05-16T00:43:51.009294121Z" level=error msg="StopPodSandbox for 
\"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" failed" error="failed to destroy network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:51.009778 kubelet[2112]: E0516 00:43:51.009636 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:43:51.009778 kubelet[2112]: E0516 00:43:51.009688 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5"} May 16 00:43:51.009778 kubelet[2112]: E0516 00:43:51.009730 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe218a81-50db-479d-bb87-757c8c52f897\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:51.009778 kubelet[2112]: E0516 00:43:51.009750 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe218a81-50db-479d-bb87-757c8c52f897\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nlbz8" podUID="fe218a81-50db-479d-bb87-757c8c52f897" May 16 00:43:51.012195 env[1326]: time="2025-05-16T00:43:51.012148721Z" level=error msg="StopPodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" failed" error="failed to destroy network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:43:51.012472 kubelet[2112]: E0516 00:43:51.012354 2112 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:43:51.012472 kubelet[2112]: E0516 00:43:51.012398 2112 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad"} May 16 00:43:51.012472 kubelet[2112]: E0516 00:43:51.012419 2112 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65a002c7-112a-4b0f-8977-55ccbd8ecc6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 16 00:43:51.012472 kubelet[2112]: E0516 00:43:51.012437 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65a002c7-112a-4b0f-8977-55ccbd8ecc6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f65v2" podUID="65a002c7-112a-4b0f-8977-55ccbd8ecc6b" May 16 00:43:54.865976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312717528.mount: Deactivated successfully. May 16 00:43:55.125038 env[1326]: time="2025-05-16T00:43:55.124919881Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:55.126713 env[1326]: time="2025-05-16T00:43:55.126675980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:55.129588 env[1326]: time="2025-05-16T00:43:55.129552785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:55.130779 env[1326]: time="2025-05-16T00:43:55.130743530Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 
00:43:55.131850 env[1326]: time="2025-05-16T00:43:55.131805197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 16 00:43:55.150181 env[1326]: time="2025-05-16T00:43:55.150137333Z" level=info msg="CreateContainer within sandbox \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 00:43:55.166858 env[1326]: time="2025-05-16T00:43:55.166799050Z" level=info msg="CreateContainer within sandbox \"571235bbdf7a48a43a5d620a96d94a0de53c74ef1782d033f81f36535ade65ad\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5bb17f73075f8c9512bf12618a156e07fd87d464525736df8a96a4da4a1d56e\"" May 16 00:43:55.167572 env[1326]: time="2025-05-16T00:43:55.167539681Z" level=info msg="StartContainer for \"d5bb17f73075f8c9512bf12618a156e07fd87d464525736df8a96a4da4a1d56e\"" May 16 00:43:55.258312 env[1326]: time="2025-05-16T00:43:55.258255892Z" level=info msg="StartContainer for \"d5bb17f73075f8c9512bf12618a156e07fd87d464525736df8a96a4da4a1d56e\" returns successfully" May 16 00:43:55.443357 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 00:43:55.443483 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 00:43:55.564658 env[1326]: time="2025-05-16T00:43:55.564603349Z" level=info msg="StopPodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\"" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.696 [INFO][3478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.696 [INFO][3478] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" iface="eth0" netns="/var/run/netns/cni-7bbf5c44-d9a2-e4bf-966c-db8394b08cee" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.696 [INFO][3478] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" iface="eth0" netns="/var/run/netns/cni-7bbf5c44-d9a2-e4bf-966c-db8394b08cee" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.698 [INFO][3478] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" iface="eth0" netns="/var/run/netns/cni-7bbf5c44-d9a2-e4bf-966c-db8394b08cee" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.698 [INFO][3478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.698 [INFO][3478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.807 [INFO][3494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.807 [INFO][3494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.807 [INFO][3494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.818 [WARNING][3494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.818 [INFO][3494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.819 [INFO][3494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:43:55.822697 env[1326]: 2025-05-16 00:43:55.821 [INFO][3478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:43:55.823105 env[1326]: time="2025-05-16T00:43:55.822769674Z" level=info msg="TearDown network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" successfully" May 16 00:43:55.823105 env[1326]: time="2025-05-16T00:43:55.822804114Z" level=info msg="StopPodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" returns successfully" May 16 00:43:55.866850 systemd[1]: run-netns-cni\x2d7bbf5c44\x2dd9a2\x2de4bf\x2d966c\x2ddb8394b08cee.mount: Deactivated successfully. 
May 16 00:43:55.927902 kubelet[2112]: I0516 00:43:55.927850 2112 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-ca-bundle\") pod \"cabb5372-9b11-40f2-abd0-e05e56644a15\" (UID: \"cabb5372-9b11-40f2-abd0-e05e56644a15\") "
May 16 00:43:55.927902 kubelet[2112]: I0516 00:43:55.927899 2112 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt2cs\" (UniqueName: \"kubernetes.io/projected/cabb5372-9b11-40f2-abd0-e05e56644a15-kube-api-access-mt2cs\") pod \"cabb5372-9b11-40f2-abd0-e05e56644a15\" (UID: \"cabb5372-9b11-40f2-abd0-e05e56644a15\") "
May 16 00:43:55.928352 kubelet[2112]: I0516 00:43:55.927922 2112 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-backend-key-pair\") pod \"cabb5372-9b11-40f2-abd0-e05e56644a15\" (UID: \"cabb5372-9b11-40f2-abd0-e05e56644a15\") "
May 16 00:43:55.930274 kubelet[2112]: I0516 00:43:55.930203 2112 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cabb5372-9b11-40f2-abd0-e05e56644a15" (UID: "cabb5372-9b11-40f2-abd0-e05e56644a15"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 16 00:43:55.931461 kubelet[2112]: I0516 00:43:55.931378 2112 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabb5372-9b11-40f2-abd0-e05e56644a15-kube-api-access-mt2cs" (OuterVolumeSpecName: "kube-api-access-mt2cs") pod "cabb5372-9b11-40f2-abd0-e05e56644a15" (UID: "cabb5372-9b11-40f2-abd0-e05e56644a15"). InnerVolumeSpecName "kube-api-access-mt2cs". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 00:43:55.932769 kubelet[2112]: I0516 00:43:55.932729 2112 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cabb5372-9b11-40f2-abd0-e05e56644a15" (UID: "cabb5372-9b11-40f2-abd0-e05e56644a15"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 16 00:43:55.933116 systemd[1]: var-lib-kubelet-pods-cabb5372\x2d9b11\x2d40f2\x2dabd0\x2de05e56644a15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmt2cs.mount: Deactivated successfully.
May 16 00:43:55.935668 systemd[1]: var-lib-kubelet-pods-cabb5372\x2d9b11\x2d40f2\x2dabd0\x2de05e56644a15-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
May 16 00:43:55.954679 kubelet[2112]: I0516 00:43:55.954582 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n94jj" podStartSLOduration=2.221152382 podStartE2EDuration="14.954563784s" podCreationTimestamp="2025-05-16 00:43:41 +0000 UTC" firstStartedPulling="2025-05-16 00:43:42.399000068 +0000 UTC m=+22.707622263" lastFinishedPulling="2025-05-16 00:43:55.13241147 +0000 UTC m=+35.441033665" observedRunningTime="2025-05-16 00:43:55.953655795 +0000 UTC m=+36.262278030" watchObservedRunningTime="2025-05-16 00:43:55.954563784 +0000 UTC m=+36.263185939"
May 16 00:43:56.029090 kubelet[2112]: I0516 00:43:56.029041 2112 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 16 00:43:56.029241 kubelet[2112]: I0516 00:43:56.029099 2112 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt2cs\" (UniqueName: \"kubernetes.io/projected/cabb5372-9b11-40f2-abd0-e05e56644a15-kube-api-access-mt2cs\") on node \"localhost\" DevicePath \"\""
May 16 00:43:56.029241 kubelet[2112]: I0516 00:43:56.029137 2112 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cabb5372-9b11-40f2-abd0-e05e56644a15-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
May 16 00:43:56.129431 kubelet[2112]: I0516 00:43:56.129390 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5t8\" (UniqueName: \"kubernetes.io/projected/083ed115-d2c3-4e81-b1aa-73fbcace47ab-kube-api-access-fk5t8\") pod \"whisker-788588bcd7-c7kkr\" (UID: \"083ed115-d2c3-4e81-b1aa-73fbcace47ab\") " pod="calico-system/whisker-788588bcd7-c7kkr"
May 16 00:43:56.129547 kubelet[2112]: I0516 00:43:56.129458 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/083ed115-d2c3-4e81-b1aa-73fbcace47ab-whisker-ca-bundle\") pod \"whisker-788588bcd7-c7kkr\" (UID: \"083ed115-d2c3-4e81-b1aa-73fbcace47ab\") " pod="calico-system/whisker-788588bcd7-c7kkr"
May 16 00:43:56.129547 kubelet[2112]: I0516 00:43:56.129495 2112 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/083ed115-d2c3-4e81-b1aa-73fbcace47ab-whisker-backend-key-pair\") pod \"whisker-788588bcd7-c7kkr\" (UID: \"083ed115-d2c3-4e81-b1aa-73fbcace47ab\") " pod="calico-system/whisker-788588bcd7-c7kkr"
May 16 00:43:56.301286 env[1326]: time="2025-05-16T00:43:56.301231547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788588bcd7-c7kkr,Uid:083ed115-d2c3-4e81-b1aa-73fbcace47ab,Namespace:calico-system,Attempt:0,}"
May 16 00:43:56.408649 systemd-networkd[1102]: cali5f282276a86: Link UP
May 16 00:43:56.410085 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 16 00:43:56.410147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5f282276a86: link becomes ready
May 16 00:43:56.410274 systemd-networkd[1102]: cali5f282276a86: Gained carrier
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.329 [INFO][3516] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.343 [INFO][3516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--788588bcd7--c7kkr-eth0 whisker-788588bcd7- calico-system 083ed115-d2c3-4e81-b1aa-73fbcace47ab 947 0 2025-05-16 00:43:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:788588bcd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-788588bcd7-c7kkr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5f282276a86 [] [] }} ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.343 [INFO][3516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.365 [INFO][3530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" HandleID="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Workload="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.365 [INFO][3530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" HandleID="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Workload="localhost-k8s-whisker--788588bcd7--c7kkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a76f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-788588bcd7-c7kkr", "timestamp":"2025-05-16 00:43:56.365775264 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.366 [INFO][3530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.366 [INFO][3530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.366 [INFO][3530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.376 [INFO][3530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.381 [INFO][3530] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.385 [INFO][3530] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.387 [INFO][3530] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.389 [INFO][3530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.389 [INFO][3530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.390 [INFO][3530] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.394 [INFO][3530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.398 [INFO][3530] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.398 [INFO][3530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" host="localhost"
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.399 [INFO][3530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 16 00:43:56.426916 env[1326]: 2025-05-16 00:43:56.399 [INFO][3530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" HandleID="k8s-pod-network.5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Workload="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.427513 env[1326]: 2025-05-16 00:43:56.400 [INFO][3516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--788588bcd7--c7kkr-eth0", GenerateName:"whisker-788588bcd7-", Namespace:"calico-system", SelfLink:"", UID:"083ed115-d2c3-4e81-b1aa-73fbcace47ab", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"788588bcd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-788588bcd7-c7kkr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5f282276a86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 16 00:43:56.427513 env[1326]: 2025-05-16 00:43:56.401 [INFO][3516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.427513 env[1326]: 2025-05-16 00:43:56.401 [INFO][3516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f282276a86 ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.427513 env[1326]: 2025-05-16 00:43:56.410 [INFO][3516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.427513 env[1326]: 2025-05-16 00:43:56.411 [INFO][3516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--788588bcd7--c7kkr-eth0", GenerateName:"whisker-788588bcd7-", Namespace:"calico-system", SelfLink:"", UID:"083ed115-d2c3-4e81-b1aa-73fbcace47ab", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"788588bcd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68", Pod:"whisker-788588bcd7-c7kkr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5f282276a86", MAC:"72:ed:03:f3:12:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 16 00:43:56.427513 env[1326]: 2025-05-16 00:43:56.421 [INFO][3516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68" Namespace="calico-system" Pod="whisker-788588bcd7-c7kkr" WorkloadEndpoint="localhost-k8s-whisker--788588bcd7--c7kkr-eth0"
May 16 00:43:56.435922 env[1326]: time="2025-05-16T00:43:56.435854995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:43:56.435922 env[1326]: time="2025-05-16T00:43:56.435893915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:43:56.436136 env[1326]: time="2025-05-16T00:43:56.435903915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:43:56.436391 env[1326]: time="2025-05-16T00:43:56.436356309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68 pid=3554 runtime=io.containerd.runc.v2
May 16 00:43:56.472774 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:43:56.491460 env[1326]: time="2025-05-16T00:43:56.491400738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788588bcd7-c7kkr,Uid:083ed115-d2c3-4e81-b1aa-73fbcace47ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e32c588184b02f62653852134ff64bb02d6fe0460cdd0dc0cb9a909d798fd68\""
May 16 00:43:56.494231 env[1326]: time="2025-05-16T00:43:56.494196305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 16 00:43:56.656102 env[1326]: time="2025-05-16T00:43:56.656034232Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 16 00:43:56.656928 env[1326]: time="2025-05-16T00:43:56.656888582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 16 00:43:56.657182 kubelet[2112]: E0516 00:43:56.657125 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 00:43:56.657237 kubelet[2112]: E0516 00:43:56.657190 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 00:43:56.659659 kubelet[2112]: E0516 00:43:56.659550 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c207aa3ffe84b5fbd162d746ecae6bd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-788588bcd7-c7kkr_calico-system(083ed115-d2c3-4e81-b1aa-73fbcace47ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 16 00:43:56.662352 env[1326]: time="2025-05-16T00:43:56.662316118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 16 00:43:56.817089 env[1326]: time="2025-05-16T00:43:56.817031889Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 16 00:43:56.817781 env[1326]: time="2025-05-16T00:43:56.817733760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 16 00:43:56.818038 kubelet[2112]: E0516 00:43:56.817993 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 00:43:56.818101 kubelet[2112]: E0516 00:43:56.818052 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 00:43:56.818219 kubelet[2112]: E0516 00:43:56.818170 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-788588bcd7-c7kkr_calico-system(083ed115-d2c3-4e81-b1aa-73fbcace47ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 16 00:43:56.819383 kubelet[2112]: E0516 00:43:56.819346 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-788588bcd7-c7kkr" podUID="083ed115-d2c3-4e81-b1aa-73fbcace47ab"
May 16 00:43:56.876000 audit[3646]: AVC avc: denied { write } for pid=3646 comm="tee" name="fd" dev="proc" ino=18322 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.876000 audit[3646]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffeb6097d5 a2=241 a3=1b6 items=1 ppid=3605 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.882772 kernel: audit: type=1400 audit(1747356236.876:296): avc: denied { write } for pid=3646 comm="tee" name="fd" dev="proc" ino=18322 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.882858 kernel: audit: type=1300 audit(1747356236.876:296): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffeb6097d5 a2=241 a3=1b6 items=1 ppid=3605 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.876000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
May 16 00:43:56.883930 kernel: audit: type=1307 audit(1747356236.876:296): cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
May 16 00:43:56.884010 kernel: audit: type=1302 audit(1747356236.876:296): item=0 name="/dev/fd/63" inode=20577 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.876000 audit: PATH item=0 name="/dev/fd/63" inode=20577 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.885969 kernel: audit: type=1327 audit(1747356236.876:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.883000 audit[3623]: AVC avc: denied { write } for pid=3623 comm="tee" name="fd" dev="proc" ino=19209 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.889573 kernel: audit: type=1400 audit(1747356236.883:297): avc: denied { write } for pid=3623 comm="tee" name="fd" dev="proc" ino=19209 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.883000 audit[3623]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff72247e5 a2=241 a3=1b6 items=1 ppid=3595 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.894207 kernel: audit: type=1300 audit(1747356236.883:297): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff72247e5 a2=241 a3=1b6 items=1 ppid=3595 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.883000 audit: CWD cwd="/etc/service/enabled/bird6/log"
May 16 00:43:56.897251 kernel: audit: type=1307 audit(1747356236.883:297): cwd="/etc/service/enabled/bird6/log"
May 16 00:43:56.897303 kernel: audit: type=1302 audit(1747356236.883:297): item=0 name="/dev/fd/63" inode=19780 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.883000 audit: PATH item=0 name="/dev/fd/63" inode=19780 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.900912 kernel: audit: type=1327 audit(1747356236.883:297): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.885000 audit[3648]: AVC avc: denied { write } for pid=3648 comm="tee" name="fd" dev="proc" ino=19213 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.885000 audit[3648]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc8e327d6 a2=241 a3=1b6 items=1 ppid=3599 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.885000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
May 16 00:43:56.885000 audit: PATH item=0 name="/dev/fd/63" inode=20582 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.885000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.896000 audit[3658]: AVC avc: denied { write } for pid=3658 comm="tee" name="fd" dev="proc" ino=19223 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.896000 audit[3658]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd5e3b7e5 a2=241 a3=1b6 items=1 ppid=3597 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.896000 audit: CWD cwd="/etc/service/enabled/felix/log"
May 16 00:43:56.896000 audit: PATH item=0 name="/dev/fd/63" inode=20587 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.898000 audit[3667]: AVC avc: denied { write } for pid=3667 comm="tee" name="fd" dev="proc" ino=19227 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.898000 audit[3667]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc9ab87e7 a2=241 a3=1b6 items=1 ppid=3604 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:43:56.898000 audit: CWD cwd="/etc/service/enabled/cni/log"
May 16 00:43:56.898000 audit: PATH item=0 name="/dev/fd/63" inode=18328 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:43:56.898000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
May 16 00:43:56.901000 audit[3620]: AVC avc: denied { write } for pid=3620 comm="tee" name="fd" dev="proc" ino=19787 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
May 16 00:43:56.901000 audit[3620]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc5f537e5 a2=241 a3=1b6 items=1 ppid=3596 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:56.901000 audit: CWD cwd="/etc/service/enabled/confd/log" May 16 00:43:56.901000 audit: PATH item=0 name="/dev/fd/63" inode=19777 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:43:56.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 16 00:43:56.905000 audit[3674]: AVC avc: denied { write } for pid=3674 comm="tee" name="fd" dev="proc" ino=19231 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 16 00:43:56.905000 audit[3674]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd82f57e6 a2=241 a3=1b6 items=1 ppid=3603 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:56.905000 audit: CWD cwd="/etc/service/enabled/bird/log" May 16 00:43:56.905000 audit: PATH item=0 name="/dev/fd/63" inode=18331 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:43:56.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 16 00:43:56.941058 kubelet[2112]: I0516 00:43:56.940557 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:43:56.942229 kubelet[2112]: E0516 00:43:56.942196 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-788588bcd7-c7kkr" podUID="083ed115-d2c3-4e81-b1aa-73fbcace47ab" May 16 00:43:56.974000 audit[3683]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=3683 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:56.974000 audit[3683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7d3a0c0 a2=0 a3=1 items=0 ppid=2257 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:56.974000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:56.979000 audit[3683]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=3683 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:43:56.979000 audit[3683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc7d3a0c0 a2=0 a3=1 items=0 ppid=2257 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:56.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:43:57.065176 systemd[1]: run-containerd-runc-k8s.io-d5bb17f73075f8c9512bf12618a156e07fd87d464525736df8a96a4da4a1d56e-runc.pNtHjO.mount: Deactivated successfully. 
May 16 00:43:57.086000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.086000 audit: BPF prog-id=10 op=LOAD May 16 
00:43:57.086000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc113b78 a2=98 a3=fffffc113b68 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.086000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.086000 audit: BPF prog-id=10 op=UNLOAD May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 
00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit: BPF prog-id=11 op=LOAD May 16 00:43:57.090000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc113808 a2=74 a3=95 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.090000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.090000 audit: BPF prog-id=11 op=UNLOAD May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.090000 audit: BPF prog-id=12 op=LOAD May 16 00:43:57.090000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc113868 a2=94 a3=2 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.090000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.091000 audit: BPF prog-id=12 op=UNLOAD May 16 00:43:57.179000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit: BPF prog-id=13 op=LOAD May 16 00:43:57.179000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc113828 a2=40 a3=fffffc113858 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.179000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.179000 audit: BPF prog-id=13 op=UNLOAD May 16 00:43:57.179000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.179000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffffc113940 a2=50 a3=0 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.179000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.188000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.188000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffc113898 a2=28 a3=fffffc1139c8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.188000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.188000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc1138c8 a2=28 a3=fffffc1139f8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.188000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.188000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc113778 a2=28 a3=fffffc1138a8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.188000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.188000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffc1138e8 a2=28 a3=fffffc113a18 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.188000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.188000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffc1138c8 a2=28 a3=fffffc1139f8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.188000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.189000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 16 00:43:57.189000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffc1138b8 a2=28 a3=fffffc1139e8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.189000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.189000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffc1138e8 a2=28 a3=fffffc113a18 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.189000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.189000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc1138c8 a2=28 a3=fffffc1139f8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.189000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 
00:43:57.189000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc1138e8 a2=28 a3=fffffc113a18 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.189000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.189000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc1138b8 a2=28 a3=fffffc1139e8 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.189000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.189000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffc113938 a2=28 a3=fffffc113a78 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.190000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffc113670 a2=50 a3=0 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.190000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.190000 audit: BPF prog-id=14 op=LOAD May 16 00:43:57.190000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc113678 a2=94 a3=5 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.191000 audit: BPF prog-id=14 op=UNLOAD May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffc113780 a2=50 a3=0 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.191000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.191000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffffc1138c8 a2=4 a3=3 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.191000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.191000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.191000 audit[3727]: AVC avc: denied { confidentiality } for pid=3727 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 16 00:43:57.191000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffc1138a8 a2=94 a3=6 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.191000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.192000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.192000 audit[3727]: AVC avc: denied { confidentiality } for pid=3727 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 16 00:43:57.192000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffc113078 a2=94 a3=83 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.192000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.193000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.193000 audit[3727]: AVC avc: denied { confidentiality } for pid=3727 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 16 00:43:57.193000 audit[3727]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffc113078 a2=94 a3=83 items=0 ppid=3600 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.193000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 16 00:43:57.207000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.207000 audit: BPF prog-id=15 op=LOAD May 16 00:43:57.207000 audit[3756]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff284c178 a2=98 a3=fffff284c168 items=0 ppid=3600 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.207000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 16 00:43:57.208000 audit: BPF prog-id=15 op=UNLOAD May 16 00:43:57.208000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.208000 audit: BPF prog-id=16 op=LOAD May 16 00:43:57.208000 audit[3756]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff284c028 a2=74 a3=95 items=0 ppid=3600 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.208000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 16 00:43:57.209000 audit: BPF prog-id=16 op=UNLOAD May 16 00:43:57.209000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { perfmon } for pid=3756 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit[3756]: AVC avc: denied { bpf } for pid=3756 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.209000 audit: BPF prog-id=17 op=LOAD May 16 00:43:57.209000 audit[3756]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff284c058 a2=40 a3=fffff284c088 items=0 ppid=3600 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.209000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 16 00:43:57.209000 audit: BPF prog-id=17 op=UNLOAD May 16 00:43:57.268663 systemd-networkd[1102]: vxlan.calico: Link UP May 16 00:43:57.268669 systemd-networkd[1102]: vxlan.calico: Gained carrier May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit: BPF prog-id=18 op=LOAD May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4e99cc8 a2=98 a3=fffff4e99cb8 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit: BPF prog-id=18 op=UNLOAD May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit: BPF prog-id=19 op=LOAD May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4e999a8 a2=74 a3=95 items=0 ppid=3600 pid=3789 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit: BPF prog-id=19 op=UNLOAD May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit: BPF prog-id=20 op=LOAD May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4e99a08 a2=94 a3=2 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit: BPF prog-id=20 op=UNLOAD May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff4e99a38 a2=28 a3=fffff4e99b68 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for 
pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4e99a68 a2=28 a3=fffff4e99b98 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4e99918 a2=28 a3=fffff4e99a48 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff4e99a88 a2=28 a3=fffff4e99bb8 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff4e99a68 a2=28 a3=fffff4e99b98 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff4e99a58 a2=28 a3=fffff4e99b88 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff4e99a88 a2=28 a3=fffff4e99bb8 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4e99a68 a2=28 a3=fffff4e99b98 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4e99a88 a2=28 a3=fffff4e99bb8 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4e99a58 a2=28 a3=fffff4e99b88 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff4e99ad8 a2=28 a3=fffff4e99c18 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 16 00:43:57.289000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.289000 audit: BPF prog-id=21 op=LOAD May 16 00:43:57.289000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff4e998f8 a2=40 a3=fffff4e99928 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.289000 audit: BPF prog-id=21 op=UNLOAD May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffff4e99920 a2=50 a3=0 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.290000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 
audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffff4e99920 a2=50 a3=0 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.290000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit: BPF prog-id=22 op=LOAD May 16 00:43:57.290000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff4e99088 a2=94 a3=2 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.290000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.290000 audit: BPF prog-id=22 op=UNLOAD May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { perfmon } for pid=3789 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit[3789]: AVC avc: denied { bpf } for pid=3789 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.290000 audit: BPF prog-id=23 op=LOAD May 16 00:43:57.290000 audit[3789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff4e99218 a2=94 a3=30 items=0 ppid=3600 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.290000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for 
pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit: BPF prog-id=24 op=LOAD May 16 00:43:57.293000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6fada48 a2=98 a3=fffff6fada38 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.293000 audit: BPF prog-id=24 op=UNLOAD May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit: BPF prog-id=25 op=LOAD May 16 00:43:57.293000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6fad6d8 a2=74 a3=95 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.293000 audit: BPF prog-id=25 op=UNLOAD May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.293000 audit: BPF prog-id=26 op=LOAD May 16 00:43:57.293000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6fad738 a2=94 a3=2 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.293000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.293000 audit: BPF prog-id=26 op=UNLOAD May 16 00:43:57.382000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: AVC 
avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit: BPF prog-id=27 op=LOAD May 16 00:43:57.382000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6fad6f8 a2=40 a3=fffff6fad728 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.382000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.382000 audit: BPF prog-id=27 op=UNLOAD May 16 00:43:57.382000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.382000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff6fad810 a2=50 a3=0 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.382000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff6fad768 a2=28 a3=fffff6fad898 items=0 ppid=3600 pid=3791 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff6fad798 a2=28 a3=fffff6fad8c8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff6fad648 a2=28 a3=fffff6fad778 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 
00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff6fad7b8 a2=28 a3=fffff6fad8e8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff6fad798 a2=28 a3=fffff6fad8c8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff6fad788 a2=28 a3=fffff6fad8b8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff6fad7b8 a2=28 a3=fffff6fad8e8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff6fad798 a2=28 a3=fffff6fad8c8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff6fad7b8 a2=28 a3=fffff6fad8e8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff6fad788 a2=28 a3=fffff6fad8b8 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff6fad808 a2=28 a3=fffff6fad948 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff6fad540 a2=50 a3=0 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for 
pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit: BPF prog-id=28 op=LOAD May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6fad548 a2=94 a3=5 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit: BPF prog-id=28 op=UNLOAD May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff6fad650 
a2=50 a3=0 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff6fad798 a2=4 a3=3 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.391000 audit[3791]: AVC avc: denied { confidentiality } for pid=3791 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 16 00:43:57.391000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff6fad778 a2=94 a3=6 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 
00:43:57.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon 
} for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { confidentiality } for pid=3791 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 16 00:43:57.392000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff6facf48 a2=94 a3=83 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { confidentiality } for pid=3791 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 16 00:43:57.392000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff6facf48 a2=94 a3=83 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.392000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff6fae988 a2=10 a3=fffff6faea78 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff6fae848 a2=10 a3=fffff6fae938 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 
audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff6fae7b8 a2=10 a3=fffff6fae938 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.392000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 16 00:43:57.392000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff6fae7b8 a2=10 a3=fffff6fae938 items=0 ppid=3600 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 16 00:43:57.403000 audit: BPF prog-id=23 op=UNLOAD May 16 00:43:57.458000 audit[3821]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:43:57.458000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe4463980 a2=0 a3=ffffbe27efa8 items=0 ppid=3600 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.458000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:43:57.462000 audit[3820]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:43:57.462000 audit[3820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffed515e40 a2=0 a3=ffffba12efa8 items=0 ppid=3600 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.462000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:43:57.472000 audit[3819]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:43:57.472000 audit[3819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe5975ca0 a2=0 a3=ffff8e7a5fa8 items=0 ppid=3600 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.472000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:43:57.473000 audit[3824]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=3824 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:43:57.473000 audit[3824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffffe471450 a2=0 a3=ffffa7c80fa8 items=0 ppid=3600 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:57.473000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:43:57.769862 kubelet[2112]: I0516 00:43:57.769599 2112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabb5372-9b11-40f2-abd0-e05e56644a15" path="/var/lib/kubelet/pods/cabb5372-9b11-40f2-abd0-e05e56644a15/volumes" May 16 00:43:57.866345 systemd[1]: run-containerd-runc-k8s.io-d5bb17f73075f8c9512bf12618a156e07fd87d464525736df8a96a4da4a1d56e-runc.uf9TDp.mount: Deactivated successfully. May 16 00:43:57.943545 kubelet[2112]: E0516 00:43:57.943490 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-788588bcd7-c7kkr" podUID="083ed115-d2c3-4e81-b1aa-73fbcace47ab" May 16 00:43:58.003684 systemd-networkd[1102]: cali5f282276a86: Gained IPv6LL May 16 00:43:58.380360 systemd-networkd[1102]: vxlan.calico: Gained IPv6LL May 16 00:43:58.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.81:22-10.0.0.1:38936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:58.775417 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:38936.service. 
May 16 00:43:58.820000 audit[3861]: USER_ACCT pid=3861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:58.821812 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 38936 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:43:58.822000 audit[3861]: CRED_ACQ pid=3861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:58.822000 audit[3861]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6a98380 a2=3 a3=1 items=0 ppid=1 pid=3861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:58.822000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:43:58.823378 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:58.827293 systemd-logind[1310]: New session 8 of user core. May 16 00:43:58.828249 systemd[1]: Started session-8.scope. 
May 16 00:43:58.830000 audit[3861]: USER_START pid=3861 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:58.832000 audit[3864]: CRED_ACQ pid=3864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:59.012539 sshd[3861]: pam_unix(sshd:session): session closed for user core May 16 00:43:59.012000 audit[3861]: USER_END pid=3861 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:59.012000 audit[3861]: CRED_DISP pid=3861 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:43:59.014983 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:38936.service: Deactivated successfully. May 16 00:43:59.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.81:22-10.0.0.1:38936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:59.016355 systemd-logind[1310]: Session 8 logged out. Waiting for processes to exit. May 16 00:43:59.016398 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:43:59.017116 systemd-logind[1310]: Removed session 8. 
May 16 00:44:01.767426 env[1326]: time="2025-05-16T00:44:01.767374157Z" level=info msg="StopPodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\"" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.809 [INFO][3902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.809 [INFO][3902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" iface="eth0" netns="/var/run/netns/cni-cb990344-3e8d-5b7c-87e5-135b7baa0ca8" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.809 [INFO][3902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" iface="eth0" netns="/var/run/netns/cni-cb990344-3e8d-5b7c-87e5-135b7baa0ca8" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.809 [INFO][3902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" iface="eth0" netns="/var/run/netns/cni-cb990344-3e8d-5b7c-87e5-135b7baa0ca8" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.810 [INFO][3902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.810 [INFO][3902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.831 [INFO][3911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.832 [INFO][3911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.832 [INFO][3911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.840 [WARNING][3911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.840 [INFO][3911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.841 [INFO][3911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:01.844788 env[1326]: 2025-05-16 00:44:01.843 [INFO][3902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:01.847552 env[1326]: time="2025-05-16T00:44:01.844897128Z" level=info msg="TearDown network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" successfully" May 16 00:44:01.847552 env[1326]: time="2025-05-16T00:44:01.844929528Z" level=info msg="StopPodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" returns successfully" May 16 00:44:01.847152 systemd[1]: run-netns-cni\x2dcb990344\x2d3e8d\x2d5b7c\x2d87e5\x2d135b7baa0ca8.mount: Deactivated successfully. 
May 16 00:44:01.848132 env[1326]: time="2025-05-16T00:44:01.848036696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-kxr6q,Uid:74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82,Namespace:calico-system,Attempt:1,}" May 16 00:44:01.958424 systemd-networkd[1102]: calia84cab58e0a: Link UP May 16 00:44:01.960305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:44:01.960388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia84cab58e0a: link becomes ready May 16 00:44:01.960554 systemd-networkd[1102]: calia84cab58e0a: Gained carrier May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.894 [INFO][3918] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0 goldmane-8f77d7b6c- calico-system 74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82 1033 0 2025-05-16 00:43:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-kxr6q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia84cab58e0a [] [] }} ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.894 [INFO][3918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.918 [INFO][3933] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" 
HandleID="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.918 [INFO][3933] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" HandleID="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a7220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-kxr6q", "timestamp":"2025-05-16 00:44:01.918179022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.918 [INFO][3933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.918 [INFO][3933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.918 [INFO][3933] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.929 [INFO][3933] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.933 [INFO][3933] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.937 [INFO][3933] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.940 [INFO][3933] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.942 [INFO][3933] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.942 [INFO][3933] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.943 [INFO][3933] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5 May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.947 [INFO][3933] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.952 [INFO][3933] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" host="localhost" May 16 
00:44:01.973283 env[1326]: 2025-05-16 00:44:01.952 [INFO][3933] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" host="localhost" May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.952 [INFO][3933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:01.973283 env[1326]: 2025-05-16 00:44:01.952 [INFO][3933] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" HandleID="k8s-pod-network.6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.973867 env[1326]: 2025-05-16 00:44:01.955 [INFO][3918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-kxr6q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia84cab58e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:01.973867 env[1326]: 2025-05-16 00:44:01.956 [INFO][3918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.973867 env[1326]: 2025-05-16 00:44:01.956 [INFO][3918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia84cab58e0a ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.973867 env[1326]: 2025-05-16 00:44:01.960 [INFO][3918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.973867 env[1326]: 2025-05-16 00:44:01.961 [INFO][3918] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5", Pod:"goldmane-8f77d7b6c-kxr6q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia84cab58e0a", MAC:"26:20:7c:38:58:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:01.973867 env[1326]: 2025-05-16 00:44:01.968 [INFO][3918] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-kxr6q" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:01.984410 env[1326]: time="2025-05-16T00:44:01.984332228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:01.984563 env[1326]: time="2025-05-16T00:44:01.984384588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:01.984563 env[1326]: time="2025-05-16T00:44:01.984396068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:01.984664 env[1326]: time="2025-05-16T00:44:01.984540786Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5 pid=3962 runtime=io.containerd.runc.v2 May 16 00:44:01.995175 kernel: kauditd_printk_skb: 522 callbacks suppressed May 16 00:44:01.995271 kernel: audit: type=1325 audit(1747356241.984:409): table=filter:107 family=2 entries=44 op=nft_register_chain pid=3967 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:01.995298 kernel: audit: type=1300 audit(1747356241.984:409): arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=fffff9049da0 a2=0 a3=ffff90a7cfa8 items=0 ppid=3600 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:01.995321 kernel: audit: type=1327 audit(1747356241.984:409): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:01.984000 audit[3967]: NETFILTER_CFG table=filter:107 family=2 entries=44 op=nft_register_chain pid=3967 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:01.984000 audit[3967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=fffff9049da0 a2=0 a3=ffff90a7cfa8 items=0 ppid=3600 pid=3967 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:01.984000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:02.031665 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:02.052512 env[1326]: time="2025-05-16T00:44:02.052474149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-kxr6q,Uid:74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82,Namespace:calico-system,Attempt:1,} returns sandbox id \"6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5\"" May 16 00:44:02.054990 env[1326]: time="2025-05-16T00:44:02.054143772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:44:02.214226 env[1326]: time="2025-05-16T00:44:02.214147066Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 16 00:44:02.214990 env[1326]: time="2025-05-16T00:44:02.214930978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 16 00:44:02.215237 kubelet[2112]: E0516 00:44:02.215174 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 
Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:44:02.215805 kubelet[2112]: E0516 00:44:02.215243 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:44:02.215805 kubelet[2112]: E0516 00:44:02.215375 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz7jz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr
:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-kxr6q_calico-system(74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 16 00:44:02.217007 kubelet[2112]: E0516 00:44:02.216975 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 
Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82" May 16 00:44:02.767921 env[1326]: time="2025-05-16T00:44:02.767807419Z" level=info msg="StopPodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\"" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.812 [INFO][4014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.812 [INFO][4014] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" iface="eth0" netns="/var/run/netns/cni-754a9bc9-d8c2-260e-3533-81db03e585d7" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.812 [INFO][4014] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" iface="eth0" netns="/var/run/netns/cni-754a9bc9-d8c2-260e-3533-81db03e585d7" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.812 [INFO][4014] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" iface="eth0" netns="/var/run/netns/cni-754a9bc9-d8c2-260e-3533-81db03e585d7" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.812 [INFO][4014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.812 [INFO][4014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.831 [INFO][4023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.831 [INFO][4023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.831 [INFO][4023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.840 [WARNING][4023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.841 [INFO][4023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.842 [INFO][4023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:02.846150 env[1326]: 2025-05-16 00:44:02.844 [INFO][4014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:02.848476 systemd[1]: run-netns-cni\x2d754a9bc9\x2dd8c2\x2d260e\x2d3533\x2d81db03e585d7.mount: Deactivated successfully. 
May 16 00:44:02.848782 env[1326]: time="2025-05-16T00:44:02.848588658Z" level=info msg="TearDown network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" successfully" May 16 00:44:02.848782 env[1326]: time="2025-05-16T00:44:02.848629018Z" level=info msg="StopPodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" returns successfully" May 16 00:44:02.849705 env[1326]: time="2025-05-16T00:44:02.849672407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-mf7nf,Uid:0d68fd98-e1d9-442d-9586-2d60cebfa71e,Namespace:calico-apiserver,Attempt:1,}" May 16 00:44:02.953032 kubelet[2112]: E0516 00:44:02.952996 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82" May 16 00:44:02.991000 audit[4056]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:02.991000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd0717550 a2=0 a3=1 items=0 ppid=2257 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:02.997654 kernel: audit: type=1325 audit(1747356242.991:410): table=filter:108 family=2 entries=20 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:02.997747 kernel: audit: type=1300 audit(1747356242.991:410): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd0717550 a2=0 a3=1 items=0 ppid=2257 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:02.997778 kernel: audit: type=1327 audit(1747356242.991:410): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:02.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:02.999000 audit[4056]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:02.999000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd0717550 a2=0 a3=1 items=0 ppid=2257 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:03.005422 kernel: audit: type=1325 audit(1747356242.999:411): table=nat:109 family=2 entries=14 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:03.005477 kernel: audit: type=1300 audit(1747356242.999:411): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd0717550 a2=0 a3=1 items=0 ppid=2257 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:03.007332 kernel: audit: type=1327 audit(1747356242.999:411): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:02.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:03.012230 systemd-networkd[1102]: calib48aa81fc66: Link UP May 16 
00:44:03.016908 systemd-networkd[1102]: calib48aa81fc66: Gained carrier May 16 00:44:03.017093 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib48aa81fc66: link becomes ready May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.910 [INFO][4031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0 calico-apiserver-7d5c695cc5- calico-apiserver 0d68fd98-e1d9-442d-9586-2d60cebfa71e 1050 0 2025-05-16 00:43:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d5c695cc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d5c695cc5-mf7nf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib48aa81fc66 [] [] }} ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.910 [INFO][4031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.946 [INFO][4046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" HandleID="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.946 [INFO][4046] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" HandleID="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003acb40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d5c695cc5-mf7nf", "timestamp":"2025-05-16 00:44:02.946050972 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.946 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.946 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.946 [INFO][4046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.960 [INFO][4046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.970 [INFO][4046] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.979 [INFO][4046] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.989 [INFO][4046] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.992 [INFO][4046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 
00:44:03.031498 env[1326]: 2025-05-16 00:44:02.992 [INFO][4046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:02.994 [INFO][4046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:03.000 [INFO][4046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:03.007 [INFO][4046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:03.007 [INFO][4046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" host="localhost" May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:03.007 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:44:03.031498 env[1326]: 2025-05-16 00:44:03.008 [INFO][4046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" HandleID="k8s-pod-network.a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.032169 env[1326]: 2025-05-16 00:44:03.010 [INFO][4031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d68fd98-e1d9-442d-9586-2d60cebfa71e", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d5c695cc5-mf7nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib48aa81fc66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:03.032169 env[1326]: 2025-05-16 00:44:03.010 [INFO][4031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.032169 env[1326]: 2025-05-16 00:44:03.010 [INFO][4031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib48aa81fc66 ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.032169 env[1326]: 2025-05-16 00:44:03.012 [INFO][4031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.032169 env[1326]: 2025-05-16 00:44:03.017 [INFO][4031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"0d68fd98-e1d9-442d-9586-2d60cebfa71e", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa", Pod:"calico-apiserver-7d5c695cc5-mf7nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib48aa81fc66", MAC:"26:2a:73:b5:ab:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:03.032169 env[1326]: 2025-05-16 00:44:03.027 [INFO][4031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-mf7nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:03.041000 audit[4080]: NETFILTER_CFG table=filter:110 family=2 entries=54 op=nft_register_chain pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:03.045239 env[1326]: time="2025-05-16T00:44:03.040468686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:03.045239 env[1326]: time="2025-05-16T00:44:03.040514766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:03.045239 env[1326]: time="2025-05-16T00:44:03.040525086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:03.045239 env[1326]: time="2025-05-16T00:44:03.040704164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa pid=4072 runtime=io.containerd.runc.v2 May 16 00:44:03.041000 audit[4080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffdbef4700 a2=0 a3=ffff998f3fa8 items=0 ppid=3600 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:03.041000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:03.046977 kernel: audit: type=1325 audit(1747356243.041:412): table=filter:110 family=2 entries=54 op=nft_register_chain pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:03.075925 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:03.099905 env[1326]: time="2025-05-16T00:44:03.099847433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-mf7nf,Uid:0d68fd98-e1d9-442d-9586-2d60cebfa71e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa\"" May 16 
00:44:03.102195 env[1326]: time="2025-05-16T00:44:03.102144411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:44:03.500113 systemd-networkd[1102]: calia84cab58e0a: Gained IPv6LL May 16 00:44:03.768713 env[1326]: time="2025-05-16T00:44:03.768498655Z" level=info msg="StopPodSandbox for \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\"" May 16 00:44:03.769141 env[1326]: time="2025-05-16T00:44:03.769082369Z" level=info msg="StopPodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\"" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.827 [INFO][4131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.827 [INFO][4131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" iface="eth0" netns="/var/run/netns/cni-6e24c833-07f0-9db0-7703-1fb5128c6b93" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.828 [INFO][4131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" iface="eth0" netns="/var/run/netns/cni-6e24c833-07f0-9db0-7703-1fb5128c6b93" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.828 [INFO][4131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" iface="eth0" netns="/var/run/netns/cni-6e24c833-07f0-9db0-7703-1fb5128c6b93" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.828 [INFO][4131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.828 [INFO][4131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.853 [INFO][4147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.855 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.855 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.865 [WARNING][4147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.865 [INFO][4147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.867 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:03.875889 env[1326]: 2025-05-16 00:44:03.872 [INFO][4131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:03.878332 systemd[1]: run-netns-cni\x2d6e24c833\x2d07f0\x2d9db0\x2d7703\x2d1fb5128c6b93.mount: Deactivated successfully. 
May 16 00:44:03.879538 env[1326]: time="2025-05-16T00:44:03.879499223Z" level=info msg="TearDown network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" successfully" May 16 00:44:03.879641 env[1326]: time="2025-05-16T00:44:03.879623461Z" level=info msg="StopPodSandbox for \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" returns successfully" May 16 00:44:03.880066 kubelet[2112]: E0516 00:44:03.880041 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:03.881202 env[1326]: time="2025-05-16T00:44:03.881170167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlbz8,Uid:fe218a81-50db-479d-bb87-757c8c52f897,Namespace:kube-system,Attempt:1,}" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.834 [INFO][4132] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.834 [INFO][4132] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" iface="eth0" netns="/var/run/netns/cni-cde5fa67-dc7c-a04b-f227-dc387b033f80" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.834 [INFO][4132] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" iface="eth0" netns="/var/run/netns/cni-cde5fa67-dc7c-a04b-f227-dc387b033f80" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.834 [INFO][4132] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" iface="eth0" netns="/var/run/netns/cni-cde5fa67-dc7c-a04b-f227-dc387b033f80" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.834 [INFO][4132] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.834 [INFO][4132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.867 [INFO][4153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.867 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.867 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.879 [WARNING][4153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.879 [INFO][4153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.883 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:03.887704 env[1326]: 2025-05-16 00:44:03.885 [INFO][4132] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:03.890090 systemd[1]: run-netns-cni\x2dcde5fa67\x2ddc7c\x2da04b\x2df227\x2ddc387b033f80.mount: Deactivated successfully. 
May 16 00:44:03.891141 env[1326]: time="2025-05-16T00:44:03.890549916Z" level=info msg="TearDown network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" successfully" May 16 00:44:03.891141 env[1326]: time="2025-05-16T00:44:03.890589156Z" level=info msg="StopPodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" returns successfully" May 16 00:44:03.891468 env[1326]: time="2025-05-16T00:44:03.891233989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-gsjpm,Uid:78479183-5b0e-4e14-9b65-379d830097f9,Namespace:calico-apiserver,Attempt:1,}" May 16 00:44:03.957833 kubelet[2112]: E0516 00:44:03.957784 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82" May 16 00:44:04.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.81:22-10.0.0.1:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:04.015913 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:48700.service. 
May 16 00:44:04.041498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:44:04.041605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliac4a1c52394: link becomes ready May 16 00:44:04.043111 systemd-networkd[1102]: caliac4a1c52394: Link UP May 16 00:44:04.043328 systemd-networkd[1102]: caliac4a1c52394: Gained carrier May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.943 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0 coredns-7c65d6cfc9- kube-system fe218a81-50db-479d-bb87-757c8c52f897 1067 0 2025-05-16 00:43:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-nlbz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac4a1c52394 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.943 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.989 [INFO][4192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" HandleID="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.989 [INFO][4192] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" HandleID="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000356220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-nlbz8", "timestamp":"2025-05-16 00:44:03.989011885 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.989 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.989 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.989 [INFO][4192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:03.998 [INFO][4192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.006 [INFO][4192] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.011 [INFO][4192] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.013 [INFO][4192] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.015 [INFO][4192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 
16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.015 [INFO][4192] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.017 [INFO][4192] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.021 [INFO][4192] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.027 [INFO][4192] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.027 [INFO][4192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" host="localhost" May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.028 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:44:04.060175 env[1326]: 2025-05-16 00:44:04.028 [INFO][4192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" HandleID="k8s-pod-network.c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.060746 env[1326]: 2025-05-16 00:44:04.036 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe218a81-50db-479d-bb87-757c8c52f897", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-nlbz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac4a1c52394", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:04.060746 env[1326]: 2025-05-16 00:44:04.037 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.060746 env[1326]: 2025-05-16 00:44:04.037 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac4a1c52394 ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.060746 env[1326]: 2025-05-16 00:44:04.041 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.060746 env[1326]: 2025-05-16 00:44:04.045 [INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe218a81-50db-479d-bb87-757c8c52f897", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba", Pod:"coredns-7c65d6cfc9-nlbz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac4a1c52394", MAC:"46:fd:28:db:a9:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:04.060746 env[1326]: 2025-05-16 00:44:04.057 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nlbz8" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:04.071535 env[1326]: time="2025-05-16T00:44:04.071462945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:04.071535 env[1326]: time="2025-05-16T00:44:04.071506945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:04.071709 env[1326]: time="2025-05-16T00:44:04.071516785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:04.072070 env[1326]: time="2025-05-16T00:44:04.072001100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba pid=4226 runtime=io.containerd.runc.v2 May 16 00:44:04.071000 audit[4227]: NETFILTER_CFG table=filter:111 family=2 entries=56 op=nft_register_chain pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:04.071000 audit[4227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27780 a0=3 a1=ffffeffc0d40 a2=0 a3=ffff9b5bffa8 items=0 ppid=3600 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:04.071000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:04.077475 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:04.076000 audit[4207]: USER_ACCT pid=4207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:04.077000 audit[4207]: CRED_ACQ pid=4207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:04.078000 audit[4207]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6c9cba0 a2=3 a3=1 items=0 ppid=1 pid=4207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:04.078000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:04.079381 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:04.084175 systemd-logind[1310]: New session 9 of user core. May 16 00:44:04.084574 systemd[1]: Started session-9.scope. 
May 16 00:44:04.091000 audit[4207]: USER_START pid=4207 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:04.093000 audit[4249]: CRED_ACQ pid=4249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:04.138045 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:04.155520 systemd-networkd[1102]: cali1f300b9a84d: Link UP May 16 00:44:04.157611 systemd-networkd[1102]: cali1f300b9a84d: Gained carrier May 16 00:44:04.159660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1f300b9a84d: link becomes ready May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:03.959 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0 calico-apiserver-7d5c695cc5- calico-apiserver 78479183-5b0e-4e14-9b65-379d830097f9 1068 0 2025-05-16 00:43:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d5c695cc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d5c695cc5-gsjpm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f300b9a84d [] [] }} ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-" May 16 00:44:04.184459 
env[1326]: 2025-05-16 00:44:03.960 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:03.998 [INFO][4199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" HandleID="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:03.998 [INFO][4199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" HandleID="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000424250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d5c695cc5-gsjpm", "timestamp":"2025-05-16 00:44:03.998077397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:03.998 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.028 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.028 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.101 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.105 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.111 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.118 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.123 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.123 [INFO][4199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.126 [INFO][4199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.129 [INFO][4199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.142 [INFO][4199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" host="localhost" May 16 
00:44:04.184459 env[1326]: 2025-05-16 00:44:04.142 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" host="localhost" May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.142 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:04.184459 env[1326]: 2025-05-16 00:44:04.142 [INFO][4199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" HandleID="k8s-pod-network.b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.185049 env[1326]: 2025-05-16 00:44:04.151 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"78479183-5b0e-4e14-9b65-379d830097f9", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d5c695cc5-gsjpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f300b9a84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:04.185049 env[1326]: 2025-05-16 00:44:04.152 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.185049 env[1326]: 2025-05-16 00:44:04.152 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f300b9a84d ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.185049 env[1326]: 2025-05-16 00:44:04.158 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.185049 env[1326]: 2025-05-16 00:44:04.162 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d5c695cc5-gsjpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"78479183-5b0e-4e14-9b65-379d830097f9", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d", Pod:"calico-apiserver-7d5c695cc5-gsjpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f300b9a84d", MAC:"72:af:76:f9:99:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:04.185049 env[1326]: 2025-05-16 00:44:04.177 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d" Namespace="calico-apiserver" Pod="calico-apiserver-7d5c695cc5-gsjpm" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:04.185483 env[1326]: time="2025-05-16T00:44:04.185440071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlbz8,Uid:fe218a81-50db-479d-bb87-757c8c52f897,Namespace:kube-system,Attempt:1,} returns sandbox id \"c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba\"" May 16 00:44:04.188533 kubelet[2112]: E0516 00:44:04.188137 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:04.195825 env[1326]: time="2025-05-16T00:44:04.195777254Z" level=info msg="CreateContainer within sandbox \"c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:44:04.198000 audit[4282]: NETFILTER_CFG table=filter:112 family=2 entries=45 op=nft_register_chain pid=4282 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:04.198000 audit[4282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24248 a0=3 a1=fffff0f7c630 a2=0 a3=ffffa5f3efa8 items=0 ppid=3600 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:04.198000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:04.211432 env[1326]: time="2025-05-16T00:44:04.211389147Z" level=info msg="CreateContainer within sandbox \"c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f55fa3d9040837cd0f6afd4368d055cc9a9e8d4db615045814a3524960c34c9\"" May 16 00:44:04.213427 env[1326]: 
time="2025-05-16T00:44:04.212710695Z" level=info msg="StartContainer for \"5f55fa3d9040837cd0f6afd4368d055cc9a9e8d4db615045814a3524960c34c9\"" May 16 00:44:04.217130 env[1326]: time="2025-05-16T00:44:04.217050614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:04.217130 env[1326]: time="2025-05-16T00:44:04.217100373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:04.217130 env[1326]: time="2025-05-16T00:44:04.217111053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:04.217567 env[1326]: time="2025-05-16T00:44:04.217533089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d pid=4291 runtime=io.containerd.runc.v2 May 16 00:44:04.283003 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:04.302836 env[1326]: time="2025-05-16T00:44:04.302738846Z" level=info msg="StartContainer for \"5f55fa3d9040837cd0f6afd4368d055cc9a9e8d4db615045814a3524960c34c9\" returns successfully" May 16 00:44:04.309570 sshd[4207]: pam_unix(sshd:session): session closed for user core May 16 00:44:04.310000 audit[4207]: USER_END pid=4207 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:04.310000 audit[4207]: CRED_DISP pid=4207 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:04.313337 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:48700.service: Deactivated successfully. May 16 00:44:04.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.81:22-10.0.0.1:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:04.314034 env[1326]: time="2025-05-16T00:44:04.313947061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5c695cc5-gsjpm,Uid:78479183-5b0e-4e14-9b65-379d830097f9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d\"" May 16 00:44:04.314430 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:44:04.314448 systemd-logind[1310]: Session 9 logged out. Waiting for processes to exit. May 16 00:44:04.317360 systemd-logind[1310]: Removed session 9. May 16 00:44:04.524166 systemd-networkd[1102]: calib48aa81fc66: Gained IPv6LL May 16 00:44:04.768387 env[1326]: time="2025-05-16T00:44:04.768333420Z" level=info msg="StopPodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\"" May 16 00:44:04.768696 env[1326]: time="2025-05-16T00:44:04.768353779Z" level=info msg="StopPodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\"" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.816 [INFO][4386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.817 [INFO][4386] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" iface="eth0" netns="/var/run/netns/cni-0f1d6a3c-b8da-e135-c3b5-b9e0725e99fd" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.817 [INFO][4386] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" iface="eth0" netns="/var/run/netns/cni-0f1d6a3c-b8da-e135-c3b5-b9e0725e99fd" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.817 [INFO][4386] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" iface="eth0" netns="/var/run/netns/cni-0f1d6a3c-b8da-e135-c3b5-b9e0725e99fd" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.817 [INFO][4386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.817 [INFO][4386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.876 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.876 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.876 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.904 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.904 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.907 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:04.910578 env[1326]: 2025-05-16 00:44:04.908 [INFO][4386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:04.911274 env[1326]: time="2025-05-16T00:44:04.910718358Z" level=info msg="TearDown network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" successfully" May 16 00:44:04.911274 env[1326]: time="2025-05-16T00:44:04.910752998Z" level=info msg="StopPodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" returns successfully" May 16 00:44:04.911682 env[1326]: time="2025-05-16T00:44:04.911645709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5648d477c5-hhnfn,Uid:b53c1672-73dc-401f-b6e2-787097ef7c61,Namespace:calico-system,Attempt:1,}" May 16 00:44:04.913078 systemd[1]: run-netns-cni\x2d0f1d6a3c\x2db8da\x2de135\x2dc3b5\x2db9e0725e99fd.mount: Deactivated successfully. 
May 16 00:44:04.964240 kubelet[2112]: E0516 00:44:04.964192 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.886 [INFO][4387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.886 [INFO][4387] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" iface="eth0" netns="/var/run/netns/cni-62944d6b-760a-d0dc-32eb-44bb118d2600" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.890 [INFO][4387] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" iface="eth0" netns="/var/run/netns/cni-62944d6b-760a-d0dc-32eb-44bb118d2600" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.894 [INFO][4387] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" iface="eth0" netns="/var/run/netns/cni-62944d6b-760a-d0dc-32eb-44bb118d2600" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.894 [INFO][4387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.894 [INFO][4387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.944 [INFO][4412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.944 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.944 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.959 [WARNING][4412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.959 [INFO][4412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.961 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:04.974016 env[1326]: 2025-05-16 00:44:04.964 [INFO][4387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:04.974354 systemd[1]: run-netns-cni\x2d62944d6b\x2d760a\x2dd0dc\x2d32eb\x2d44bb118d2600.mount: Deactivated successfully. 
May 16 00:44:04.975607 env[1326]: time="2025-05-16T00:44:04.975098312Z" level=info msg="TearDown network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" successfully" May 16 00:44:04.975607 env[1326]: time="2025-05-16T00:44:04.975147631Z" level=info msg="StopPodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" returns successfully" May 16 00:44:04.976137 env[1326]: time="2025-05-16T00:44:04.976079262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2kkd,Uid:00bfb5c0-df56-4053-a2df-e7346d66a58a,Namespace:calico-system,Attempt:1,}" May 16 00:44:04.979685 kubelet[2112]: I0516 00:44:04.979106 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nlbz8" podStartSLOduration=37.979087794 podStartE2EDuration="37.979087794s" podCreationTimestamp="2025-05-16 00:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:44:04.978778557 +0000 UTC m=+45.287400752" watchObservedRunningTime="2025-05-16 00:44:04.979087794 +0000 UTC m=+45.287709949" May 16 00:44:04.996000 audit[4441]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:04.996000 audit[4441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe7e44b90 a2=0 a3=1 items=0 ppid=2257 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:04.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:05.001000 audit[4441]: NETFILTER_CFG table=nat:114 family=2 entries=14 op=nft_register_rule pid=4441 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:05.001000 audit[4441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe7e44b90 a2=0 a3=1 items=0 ppid=2257 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:05.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:05.023000 audit[4463]: NETFILTER_CFG table=filter:115 family=2 entries=17 op=nft_register_rule pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:05.023000 audit[4463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd7683490 a2=0 a3=1 items=0 ppid=2257 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:05.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:05.028000 audit[4463]: NETFILTER_CFG table=nat:116 family=2 entries=35 op=nft_register_chain pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:05.028000 audit[4463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd7683490 a2=0 a3=1 items=0 ppid=2257 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:05.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:05.195043 systemd-networkd[1102]: cali81a92ec0250: Link UP 
May 16 00:44:05.197894 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:44:05.197996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali81a92ec0250: link becomes ready May 16 00:44:05.198109 systemd-networkd[1102]: cali81a92ec0250: Gained carrier May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:04.974 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0 calico-kube-controllers-5648d477c5- calico-system b53c1672-73dc-401f-b6e2-787097ef7c61 1090 0 2025-05-16 00:43:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5648d477c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5648d477c5-hhnfn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81a92ec0250 [] [] }} ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:04.974 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.025 [INFO][4438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" HandleID="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" 
Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.026 [INFO][4438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" HandleID="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5648d477c5-hhnfn", "timestamp":"2025-05-16 00:44:05.025926798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.026 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.026 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.026 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.037 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.046 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.051 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.053 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.055 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.055 [INFO][4438] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.060 [INFO][4438] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.074 [INFO][4438] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.185 [INFO][4438] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" host="localhost" May 16 
00:44:05.255153 env[1326]: 2025-05-16 00:44:05.185 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" host="localhost" May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.185 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:05.255153 env[1326]: 2025-05-16 00:44:05.185 [INFO][4438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" HandleID="k8s-pod-network.679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.255826 env[1326]: 2025-05-16 00:44:05.192 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0", GenerateName:"calico-kube-controllers-5648d477c5-", Namespace:"calico-system", SelfLink:"", UID:"b53c1672-73dc-401f-b6e2-787097ef7c61", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5648d477c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5648d477c5-hhnfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81a92ec0250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:05.255826 env[1326]: 2025-05-16 00:44:05.193 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.255826 env[1326]: 2025-05-16 00:44:05.193 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81a92ec0250 ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.255826 env[1326]: 2025-05-16 00:44:05.198 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.255826 env[1326]: 2025-05-16 00:44:05.198 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0", GenerateName:"calico-kube-controllers-5648d477c5-", Namespace:"calico-system", SelfLink:"", UID:"b53c1672-73dc-401f-b6e2-787097ef7c61", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5648d477c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a", Pod:"calico-kube-controllers-5648d477c5-hhnfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81a92ec0250", MAC:"f6:2c:57:1b:ea:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:05.255826 env[1326]: 2025-05-16 00:44:05.248 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a" Namespace="calico-system" Pod="calico-kube-controllers-5648d477c5-hhnfn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:05.260000 audit[4482]: NETFILTER_CFG table=filter:117 family=2 entries=48 op=nft_register_chain pid=4482 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:05.260000 audit[4482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffcdc60740 a2=0 a3=ffffb1f86fa8 items=0 ppid=3600 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:05.260000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:05.308505 systemd-networkd[1102]: calib99d2e79800: Link UP May 16 00:44:05.311261 systemd-networkd[1102]: calib99d2e79800: Gained carrier May 16 00:44:05.312012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib99d2e79800: link becomes ready May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.031 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--d2kkd-eth0 csi-node-driver- calico-system 00bfb5c0-df56-4053-a2df-e7346d66a58a 1092 0 2025-05-16 00:43:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-d2kkd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib99d2e79800 [] [] }} 
ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.031 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.071 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" HandleID="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.072 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" HandleID="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a6160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-d2kkd", "timestamp":"2025-05-16 00:44:05.071869376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.072 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.186 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.186 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.200 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.245 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.249 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.251 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.253 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.253 [INFO][4466] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.255 [INFO][4466] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.266 [INFO][4466] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.302 [INFO][4466] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" host="localhost" May 16 
00:44:05.327273 env[1326]: 2025-05-16 00:44:05.303 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" host="localhost" May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.303 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:05.327273 env[1326]: 2025-05-16 00:44:05.303 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" HandleID="k8s-pod-network.894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.327945 env[1326]: 2025-05-16 00:44:05.305 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d2kkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00bfb5c0-df56-4053-a2df-e7346d66a58a", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-d2kkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib99d2e79800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:05.327945 env[1326]: 2025-05-16 00:44:05.305 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.327945 env[1326]: 2025-05-16 00:44:05.305 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib99d2e79800 ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.327945 env[1326]: 2025-05-16 00:44:05.312 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.327945 env[1326]: 2025-05-16 00:44:05.312 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d2kkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00bfb5c0-df56-4053-a2df-e7346d66a58a", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace", Pod:"csi-node-driver-d2kkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib99d2e79800", MAC:"96:43:11:2a:65:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:05.327945 env[1326]: 2025-05-16 00:44:05.321 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace" Namespace="calico-system" Pod="csi-node-driver-d2kkd" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:05.328717 env[1326]: time="2025-05-16T00:44:05.328647773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:05.328866 env[1326]: time="2025-05-16T00:44:05.328841132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:05.328958 env[1326]: time="2025-05-16T00:44:05.328936251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:05.329382 env[1326]: time="2025-05-16T00:44:05.329336607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a pid=4490 runtime=io.containerd.runc.v2 May 16 00:44:05.343000 audit[4521]: NETFILTER_CFG table=filter:118 family=2 entries=58 op=nft_register_chain pid=4521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:05.343000 audit[4521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27164 a0=3 a1=ffffd7fa3890 a2=0 a3=ffffb8f71fa8 items=0 ppid=3600 pid=4521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:05.343000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:05.345733 env[1326]: time="2025-05-16T00:44:05.345673337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:05.345733 env[1326]: time="2025-05-16T00:44:05.345720776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:05.345841 env[1326]: time="2025-05-16T00:44:05.345732016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:05.345951 env[1326]: time="2025-05-16T00:44:05.345925494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace pid=4524 runtime=io.containerd.runc.v2 May 16 00:44:05.417760 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:05.418897 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:05.438452 env[1326]: time="2025-05-16T00:44:05.438402884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2kkd,Uid:00bfb5c0-df56-4053-a2df-e7346d66a58a,Namespace:calico-system,Attempt:1,} returns sandbox id \"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace\"" May 16 00:44:05.453843 env[1326]: time="2025-05-16T00:44:05.453800622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5648d477c5-hhnfn,Uid:b53c1672-73dc-401f-b6e2-787097ef7c61,Namespace:calico-system,Attempt:1,} returns sandbox id \"679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a\"" May 16 00:44:05.774674 env[1326]: time="2025-05-16T00:44:05.774625590Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:05.776237 env[1326]: time="2025-05-16T00:44:05.776199816Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 16 00:44:05.778303 env[1326]: time="2025-05-16T00:44:05.778260437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:05.780554 env[1326]: time="2025-05-16T00:44:05.780502896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 16 00:44:05.780901 env[1326]: time="2025-05-16T00:44:05.780868573Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:05.783172 env[1326]: time="2025-05-16T00:44:05.783119232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:44:05.784366 env[1326]: time="2025-05-16T00:44:05.784173543Z" level=info msg="CreateContainer within sandbox \"a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:44:05.792881 env[1326]: time="2025-05-16T00:44:05.792837743Z" level=info msg="CreateContainer within sandbox \"a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e46eccbf488c366e2abc7269ea2aca545da1f168a3e1ac1c959b21d047e77634\"" May 16 00:44:05.793864 env[1326]: time="2025-05-16T00:44:05.793833894Z" level=info msg="StartContainer for \"e46eccbf488c366e2abc7269ea2aca545da1f168a3e1ac1c959b21d047e77634\"" May 16 00:44:05.885383 env[1326]: time="2025-05-16T00:44:05.885329652Z" level=info msg="StartContainer for \"e46eccbf488c366e2abc7269ea2aca545da1f168a3e1ac1c959b21d047e77634\" returns successfully" May 16 00:44:05.932140 systemd-networkd[1102]: 
caliac4a1c52394: Gained IPv6LL May 16 00:44:05.976999 kubelet[2112]: E0516 00:44:05.973256 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:05.979834 kubelet[2112]: I0516 00:44:05.979776 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d5c695cc5-mf7nf" podStartSLOduration=25.298476245 podStartE2EDuration="27.979758303s" podCreationTimestamp="2025-05-16 00:43:38 +0000 UTC" firstStartedPulling="2025-05-16 00:44:03.101391018 +0000 UTC m=+43.410013213" lastFinishedPulling="2025-05-16 00:44:05.782673116 +0000 UTC m=+46.091295271" observedRunningTime="2025-05-16 00:44:05.979248468 +0000 UTC m=+46.287870663" watchObservedRunningTime="2025-05-16 00:44:05.979758303 +0000 UTC m=+46.288380498" May 16 00:44:05.996126 systemd-networkd[1102]: cali1f300b9a84d: Gained IPv6LL May 16 00:44:06.001000 audit[4614]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule pid=4614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:06.001000 audit[4614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcd8e0ba0 a2=0 a3=1 items=0 ppid=2257 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:06.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:06.004404 env[1326]: time="2025-05-16T00:44:06.004364838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:06.005868 env[1326]: time="2025-05-16T00:44:06.005837184Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:06.007168 env[1326]: time="2025-05-16T00:44:06.007137853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:06.009254 env[1326]: time="2025-05-16T00:44:06.009216954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:06.009701 env[1326]: time="2025-05-16T00:44:06.009668510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 16 00:44:06.011195 env[1326]: time="2025-05-16T00:44:06.011087577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 00:44:06.012138 env[1326]: time="2025-05-16T00:44:06.012096848Z" level=info msg="CreateContainer within sandbox \"b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:44:06.012000 audit[4614]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:06.012000 audit[4614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcd8e0ba0 a2=0 a3=1 items=0 ppid=2257 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:06.012000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:06.026871 env[1326]: time="2025-05-16T00:44:06.026756796Z" level=info msg="CreateContainer within sandbox \"b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6cee93a6610d337a87e3bd007d4c800f699555124d806b2dc4719ab217080e42\"" May 16 00:44:06.027887 env[1326]: time="2025-05-16T00:44:06.027837267Z" level=info msg="StartContainer for \"6cee93a6610d337a87e3bd007d4c800f699555124d806b2dc4719ab217080e42\"" May 16 00:44:06.123937 env[1326]: time="2025-05-16T00:44:06.123863723Z" level=info msg="StartContainer for \"6cee93a6610d337a87e3bd007d4c800f699555124d806b2dc4719ab217080e42\" returns successfully" May 16 00:44:06.444075 systemd-networkd[1102]: cali81a92ec0250: Gained IPv6LL May 16 00:44:06.768398 env[1326]: time="2025-05-16T00:44:06.768298409Z" level=info msg="StopPodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\"" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.835 [INFO][4667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.836 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" iface="eth0" netns="/var/run/netns/cni-c4f71c4b-ac4a-f654-b980-f5b4612266b1" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.836 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" iface="eth0" netns="/var/run/netns/cni-c4f71c4b-ac4a-f654-b980-f5b4612266b1" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.836 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" iface="eth0" netns="/var/run/netns/cni-c4f71c4b-ac4a-f654-b980-f5b4612266b1" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.836 [INFO][4667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.836 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.862 [INFO][4676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.862 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.862 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.871 [WARNING][4676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.872 [INFO][4676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.873 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:06.877717 env[1326]: 2025-05-16 00:44:06.875 [INFO][4667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:06.879448 env[1326]: time="2025-05-16T00:44:06.877864744Z" level=info msg="TearDown network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" successfully" May 16 00:44:06.879448 env[1326]: time="2025-05-16T00:44:06.877899183Z" level=info msg="StopPodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" returns successfully" May 16 00:44:06.879500 kubelet[2112]: E0516 00:44:06.878222 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:06.880266 systemd[1]: run-netns-cni\x2dc4f71c4b\x2dac4a\x2df654\x2db980\x2df5b4612266b1.mount: Deactivated successfully. 
May 16 00:44:06.881840 env[1326]: time="2025-05-16T00:44:06.881803548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f65v2,Uid:65a002c7-112a-4b0f-8977-55ccbd8ecc6b,Namespace:kube-system,Attempt:1,}" May 16 00:44:06.984323 kubelet[2112]: E0516 00:44:06.982590 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:06.997415 kubelet[2112]: I0516 00:44:06.995074 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d5c695cc5-gsjpm" podStartSLOduration=27.301105623 podStartE2EDuration="28.99505665s" podCreationTimestamp="2025-05-16 00:43:38 +0000 UTC" firstStartedPulling="2025-05-16 00:44:04.316958512 +0000 UTC m=+44.625580707" lastFinishedPulling="2025-05-16 00:44:06.010909539 +0000 UTC m=+46.319531734" observedRunningTime="2025-05-16 00:44:06.99503101 +0000 UTC m=+47.303653205" watchObservedRunningTime="2025-05-16 00:44:06.99505665 +0000 UTC m=+47.303678805" May 16 00:44:07.008000 audit[4705]: NETFILTER_CFG table=filter:121 family=2 entries=14 op=nft_register_rule pid=4705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:07.010213 kernel: kauditd_printk_skb: 43 callbacks suppressed May 16 00:44:07.010305 kernel: audit: type=1325 audit(1747356247.008:432): table=filter:121 family=2 entries=14 op=nft_register_rule pid=4705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:07.008000 audit[4705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe43d48c0 a2=0 a3=1 items=0 ppid=2257 pid=4705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:07.015810 kernel: audit: type=1300 audit(1747356247.008:432): arch=c00000b7 syscall=211 
success=yes exit=5248 a0=3 a1=ffffe43d48c0 a2=0 a3=1 items=0 ppid=2257 pid=4705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:07.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:07.020957 kernel: audit: type=1327 audit(1747356247.008:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:07.022000 audit[4705]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:07.022000 audit[4705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe43d48c0 a2=0 a3=1 items=0 ppid=2257 pid=4705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:07.033017 kernel: audit: type=1325 audit(1747356247.022:433): table=nat:122 family=2 entries=20 op=nft_register_rule pid=4705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:07.033116 kernel: audit: type=1300 audit(1747356247.022:433): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe43d48c0 a2=0 a3=1 items=0 ppid=2257 pid=4705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:07.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:07.036899 kernel: audit: type=1327 audit(1747356247.022:433): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:07.109699 systemd-networkd[1102]: calic41d8f6b2da: Link UP May 16 00:44:07.111369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:44:07.111533 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic41d8f6b2da: link becomes ready May 16 00:44:07.111458 systemd-networkd[1102]: calic41d8f6b2da: Gained carrier May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:06.971 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0 coredns-7c65d6cfc9- kube-system 65a002c7-112a-4b0f-8977-55ccbd8ecc6b 1135 0 2025-05-16 00:43:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-f65v2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic41d8f6b2da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:06.972 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.053 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" HandleID="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" 
Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.053 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" HandleID="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004f8480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-f65v2", "timestamp":"2025-05-16 00:44:07.053248977 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.053 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.053 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.053 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.064 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.069 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.076 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.082 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.090 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.090 [INFO][4699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.093 [INFO][4699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168 May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.097 [INFO][4699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.104 [INFO][4699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" host="localhost" May 16 
00:44:07.132443 env[1326]: 2025-05-16 00:44:07.104 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" host="localhost" May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.104 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:07.132443 env[1326]: 2025-05-16 00:44:07.104 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" HandleID="k8s-pod-network.eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.134762 env[1326]: 2025-05-16 00:44:07.107 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65a002c7-112a-4b0f-8977-55ccbd8ecc6b", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-f65v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic41d8f6b2da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:07.134762 env[1326]: 2025-05-16 00:44:07.108 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.134762 env[1326]: 2025-05-16 00:44:07.108 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic41d8f6b2da ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.134762 env[1326]: 2025-05-16 00:44:07.112 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.134762 env[1326]: 2025-05-16 00:44:07.114 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65a002c7-112a-4b0f-8977-55ccbd8ecc6b", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168", Pod:"coredns-7c65d6cfc9-f65v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic41d8f6b2da", MAC:"a6:4b:a2:7b:46:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:07.134762 env[1326]: 2025-05-16 00:44:07.129 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f65v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:07.149118 systemd-networkd[1102]: calib99d2e79800: Gained IPv6LL May 16 00:44:07.151562 env[1326]: time="2025-05-16T00:44:07.151448353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:07.151562 env[1326]: time="2025-05-16T00:44:07.151491313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:07.151562 env[1326]: time="2025-05-16T00:44:07.151501833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:07.152910 env[1326]: time="2025-05-16T00:44:07.152859701Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168 pid=4723 runtime=io.containerd.runc.v2 May 16 00:44:07.159000 audit[4732]: NETFILTER_CFG table=filter:123 family=2 entries=48 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:07.159000 audit[4732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=ffffd6f81f20 a2=0 a3=ffff826d8fa8 items=0 ppid=3600 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:07.166283 kernel: audit: type=1325 audit(1747356247.159:434): table=filter:123 family=2 entries=48 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 16 00:44:07.166416 kernel: audit: type=1300 audit(1747356247.159:434): arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=ffffd6f81f20 a2=0 a3=ffff826d8fa8 items=0 ppid=3600 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:07.159000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:07.168347 kernel: audit: type=1327 audit(1747356247.159:434): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 16 00:44:07.196262 systemd-resolved[1241]: Failed to determine the local hostname and 
LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:07.227261 env[1326]: time="2025-05-16T00:44:07.227208967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f65v2,Uid:65a002c7-112a-4b0f-8977-55ccbd8ecc6b,Namespace:kube-system,Attempt:1,} returns sandbox id \"eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168\"" May 16 00:44:07.227930 kubelet[2112]: E0516 00:44:07.227905 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:07.229805 env[1326]: time="2025-05-16T00:44:07.229763624Z" level=info msg="CreateContainer within sandbox \"eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:44:07.475284 env[1326]: time="2025-05-16T00:44:07.475156466Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.483209 env[1326]: time="2025-05-16T00:44:07.483150915Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.485173 env[1326]: time="2025-05-16T00:44:07.485127298Z" level=info msg="CreateContainer within sandbox \"eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b026648eea5fe3d39431539ee9c4879cc435e462ffeb5c858a2191bfa71ba39\"" May 16 00:44:07.485524 env[1326]: time="2025-05-16T00:44:07.485459375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.485701 env[1326]: 
time="2025-05-16T00:44:07.485673733Z" level=info msg="StartContainer for \"5b026648eea5fe3d39431539ee9c4879cc435e462ffeb5c858a2191bfa71ba39\"" May 16 00:44:07.491726 env[1326]: time="2025-05-16T00:44:07.491692760Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.492150 env[1326]: time="2025-05-16T00:44:07.492122716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 16 00:44:07.494215 env[1326]: time="2025-05-16T00:44:07.494168298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 00:44:07.494396 env[1326]: time="2025-05-16T00:44:07.494366577Z" level=info msg="CreateContainer within sandbox \"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 00:44:07.566482 env[1326]: time="2025-05-16T00:44:07.566437863Z" level=info msg="CreateContainer within sandbox \"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"61e5798280526185e38c3c1fddfdd637c562fec17e5d1e4137ded2a20b4eba98\"" May 16 00:44:07.567202 env[1326]: time="2025-05-16T00:44:07.567158856Z" level=info msg="StartContainer for \"5b026648eea5fe3d39431539ee9c4879cc435e462ffeb5c858a2191bfa71ba39\" returns successfully" May 16 00:44:07.567490 env[1326]: time="2025-05-16T00:44:07.567467734Z" level=info msg="StartContainer for \"61e5798280526185e38c3c1fddfdd637c562fec17e5d1e4137ded2a20b4eba98\"" May 16 00:44:07.639505 env[1326]: time="2025-05-16T00:44:07.639445460Z" level=info msg="StartContainer for \"61e5798280526185e38c3c1fddfdd637c562fec17e5d1e4137ded2a20b4eba98\" returns successfully" May 16 
00:44:07.987572 kubelet[2112]: I0516 00:44:07.986463 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:44:07.987572 kubelet[2112]: E0516 00:44:07.987044 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:07.998339 kubelet[2112]: I0516 00:44:07.998272 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f65v2" podStartSLOduration=40.998254064 podStartE2EDuration="40.998254064s" podCreationTimestamp="2025-05-16 00:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:44:07.998086186 +0000 UTC m=+48.306708381" watchObservedRunningTime="2025-05-16 00:44:07.998254064 +0000 UTC m=+48.306876259" May 16 00:44:08.017000 audit[4841]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:08.017000 audit[4841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebb0ed50 a2=0 a3=1 items=0 ppid=2257 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:08.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:08.021972 kernel: audit: type=1325 audit(1747356248.017:435): table=filter:124 family=2 entries=14 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:08.029000 audit[4841]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 
00:44:08.029000 audit[4841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffebb0ed50 a2=0 a3=1 items=0 ppid=2257 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:08.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:08.428084 systemd-networkd[1102]: calic41d8f6b2da: Gained IPv6LL May 16 00:44:08.989503 kubelet[2112]: E0516 00:44:08.989151 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:09.044000 audit[4844]: NETFILTER_CFG table=filter:126 family=2 entries=13 op=nft_register_rule pid=4844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:09.044000 audit[4844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffffc26c400 a2=0 a3=1 items=0 ppid=2257 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:09.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:09.067000 audit[4844]: NETFILTER_CFG table=nat:127 family=2 entries=63 op=nft_register_chain pid=4844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:09.067000 audit[4844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23436 a0=3 a1=fffffc26c400 a2=0 a3=1 items=0 ppid=2257 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:09.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:09.312519 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:48710.service. May 16 00:44:09.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.81:22-10.0.0.1:48710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.362414 sshd[4846]: Accepted publickey for core from 10.0.0.1 port 48710 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:09.361000 audit[4846]: USER_ACCT pid=4846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.363000 audit[4846]: CRED_ACQ pid=4846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.363000 audit[4846]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeef24d70 a2=3 a3=1 items=0 ppid=1 pid=4846 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:09.363000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:09.364555 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:09.369474 systemd-logind[1310]: New session 10 of user core. May 16 00:44:09.370452 systemd[1]: Started session-10.scope. 
May 16 00:44:09.379000 audit[4846]: USER_START pid=4846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.381000 audit[4849]: CRED_ACQ pid=4849 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.615000 audit[4846]: USER_END pid=4846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.616000 audit[4846]: CRED_DISP pid=4846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.615311 sshd[4846]: pam_unix(sshd:session): session closed for user core May 16 00:44:09.617912 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:48714.service. May 16 00:44:09.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.81:22-10.0.0.1:48714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.628514 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:48710.service: Deactivated successfully. May 16 00:44:09.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.81:22-10.0.0.1:48710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:09.629877 systemd-logind[1310]: Session 10 logged out. Waiting for processes to exit. May 16 00:44:09.629951 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:44:09.631073 systemd-logind[1310]: Removed session 10. May 16 00:44:09.660000 audit[4859]: USER_ACCT pid=4859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.661893 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 48714 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:09.661000 audit[4859]: CRED_ACQ pid=4859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.662000 audit[4859]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeeae2550 a2=3 a3=1 items=0 ppid=1 pid=4859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:09.662000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:09.663607 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:09.667517 systemd-logind[1310]: New session 11 of user core. May 16 00:44:09.667957 systemd[1]: Started session-11.scope. 
May 16 00:44:09.670000 audit[4859]: USER_START pid=4859 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.671000 audit[4864]: CRED_ACQ pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.955737 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:48722.service. May 16 00:44:09.953175 sshd[4859]: pam_unix(sshd:session): session closed for user core May 16 00:44:09.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.81:22-10.0.0.1:48722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.963000 audit[4859]: USER_END pid=4859 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.963000 audit[4859]: CRED_DISP pid=4859 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:09.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.81:22-10.0.0.1:48714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.966460 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:48714.service: Deactivated successfully. 
May 16 00:44:09.967921 systemd-logind[1310]: Session 11 logged out. Waiting for processes to exit. May 16 00:44:09.968003 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:44:09.968811 systemd-logind[1310]: Removed session 11. May 16 00:44:09.992883 kubelet[2112]: E0516 00:44:09.992843 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:10.027000 audit[4871]: USER_ACCT pid=4871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:10.028441 sshd[4871]: Accepted publickey for core from 10.0.0.1 port 48722 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:10.028000 audit[4871]: CRED_ACQ pid=4871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:10.028000 audit[4871]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff93282b0 a2=3 a3=1 items=0 ppid=1 pid=4871 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:10.028000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:10.029765 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:10.033434 systemd-logind[1310]: New session 12 of user core. May 16 00:44:10.034273 systemd[1]: Started session-12.scope. 
May 16 00:44:10.036000 audit[4871]: USER_START pid=4871 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:10.038000 audit[4876]: CRED_ACQ pid=4876 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:10.120327 env[1326]: time="2025-05-16T00:44:10.120267077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:10.121788 env[1326]: time="2025-05-16T00:44:10.121716185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:10.123350 env[1326]: time="2025-05-16T00:44:10.123319332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:10.125101 env[1326]: time="2025-05-16T00:44:10.125064117Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:10.125727 env[1326]: time="2025-05-16T00:44:10.125694032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 16 00:44:10.129896 env[1326]: 
time="2025-05-16T00:44:10.129844038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 00:44:10.142469 env[1326]: time="2025-05-16T00:44:10.142423054Z" level=info msg="CreateContainer within sandbox \"679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 00:44:10.156119 env[1326]: time="2025-05-16T00:44:10.156066261Z" level=info msg="CreateContainer within sandbox \"679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3a75f63234029a7c04e98c70562aa87698ccb32a0581bfbe3c737af9f4ade824\"" May 16 00:44:10.156867 env[1326]: time="2025-05-16T00:44:10.156831534Z" level=info msg="StartContainer for \"3a75f63234029a7c04e98c70562aa87698ccb32a0581bfbe3c737af9f4ade824\"" May 16 00:44:10.262954 sshd[4871]: pam_unix(sshd:session): session closed for user core May 16 00:44:10.263000 audit[4871]: USER_END pid=4871 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:10.263000 audit[4871]: CRED_DISP pid=4871 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:10.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.81:22-10.0.0.1:48722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:10.265974 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:48722.service: Deactivated successfully. 
May 16 00:44:10.266951 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:44:10.267289 systemd-logind[1310]: Session 12 logged out. Waiting for processes to exit. May 16 00:44:10.268701 systemd-logind[1310]: Removed session 12. May 16 00:44:10.270284 env[1326]: time="2025-05-16T00:44:10.270207556Z" level=info msg="StartContainer for \"3a75f63234029a7c04e98c70562aa87698ccb32a0581bfbe3c737af9f4ade824\" returns successfully" May 16 00:44:11.400633 env[1326]: time="2025-05-16T00:44:11.400580815Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:11.402069 env[1326]: time="2025-05-16T00:44:11.402042043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:11.403761 env[1326]: time="2025-05-16T00:44:11.403717110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:11.405077 env[1326]: time="2025-05-16T00:44:11.405036539Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:11.406042 env[1326]: time="2025-05-16T00:44:11.406009171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 16 00:44:11.412271 env[1326]: time="2025-05-16T00:44:11.412232600Z" level=info msg="CreateContainer within sandbox 
\"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 00:44:11.427291 env[1326]: time="2025-05-16T00:44:11.427227518Z" level=info msg="CreateContainer within sandbox \"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"352a143ace9820cf48e759c2f3d3c7d90ee0493fbeea47924e10b0fd02447fad\"" May 16 00:44:11.428031 env[1326]: time="2025-05-16T00:44:11.427952713Z" level=info msg="StartContainer for \"352a143ace9820cf48e759c2f3d3c7d90ee0493fbeea47924e10b0fd02447fad\"" May 16 00:44:11.498165 env[1326]: time="2025-05-16T00:44:11.498102542Z" level=info msg="StartContainer for \"352a143ace9820cf48e759c2f3d3c7d90ee0493fbeea47924e10b0fd02447fad\" returns successfully" May 16 00:44:11.892079 kubelet[2112]: I0516 00:44:11.892034 2112 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 00:44:11.894879 kubelet[2112]: I0516 00:44:11.894847 2112 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 00:44:12.000111 kubelet[2112]: I0516 00:44:12.000085 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:44:12.010615 kubelet[2112]: I0516 00:44:12.010559 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5648d477c5-hhnfn" podStartSLOduration=25.337556214 podStartE2EDuration="30.010540217s" podCreationTimestamp="2025-05-16 00:43:42 +0000 UTC" firstStartedPulling="2025-05-16 00:44:05.455427007 +0000 UTC m=+45.764049202" lastFinishedPulling="2025-05-16 00:44:10.12841105 +0000 UTC m=+50.437033205" observedRunningTime="2025-05-16 00:44:11.009005359 +0000 UTC 
m=+51.317627554" watchObservedRunningTime="2025-05-16 00:44:12.010540217 +0000 UTC m=+52.319162412" May 16 00:44:12.768533 env[1326]: time="2025-05-16T00:44:12.768485762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:44:12.921836 env[1326]: time="2025-05-16T00:44:12.921766777Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 16 00:44:12.922747 env[1326]: time="2025-05-16T00:44:12.922704690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 16 00:44:12.922984 kubelet[2112]: E0516 00:44:12.922917 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:44:12.923267 kubelet[2112]: E0516 00:44:12.922992 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:44:12.923267 kubelet[2112]: E0516 00:44:12.923090 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c207aa3ffe84b5fbd162d746ecae6bd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-788588bcd7-c7kkr_calico-system(083ed115-d2c3-4e81-b1aa-73fbcace47ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 16 00:44:12.925520 env[1326]: time="2025-05-16T00:44:12.925484068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:44:13.073027 
env[1326]: time="2025-05-16T00:44:13.072885260Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 16 00:44:13.076067 env[1326]: time="2025-05-16T00:44:13.076020835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 16 00:44:13.076310 kubelet[2112]: E0516 00:44:13.076268 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:44:13.076367 kubelet[2112]: E0516 00:44:13.076321 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:44:13.076500 kubelet[2112]: E0516 00:44:13.076435 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-788588bcd7-c7kkr_calico-system(083ed115-d2c3-4e81-b1aa-73fbcace47ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 16 00:44:13.077604 kubelet[2112]: E0516 00:44:13.077548 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-788588bcd7-c7kkr" podUID="083ed115-d2c3-4e81-b1aa-73fbcace47ab" May 16 00:44:14.771014 env[1326]: time="2025-05-16T00:44:14.770950414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:44:14.918415 env[1326]: time="2025-05-16T00:44:14.918330395Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 16 00:44:14.919285 env[1326]: time="2025-05-16T00:44:14.919221788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 16 00:44:14.919557 kubelet[2112]: E0516 00:44:14.919510 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:44:14.919825 kubelet[2112]: E0516 00:44:14.919568 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:44:14.919825 kubelet[2112]: E0516 00:44:14.919685 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz7jz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-kxr6q_calico-system(74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 16 00:44:14.920903 kubelet[2112]: E0516 00:44:14.920864 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82" May 16 00:44:15.134574 kubelet[2112]: I0516 00:44:15.134541 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:44:15.156344 systemd[1]: run-containerd-runc-k8s.io-3a75f63234029a7c04e98c70562aa87698ccb32a0581bfbe3c737af9f4ade824-runc.GvpbYp.mount: Deactivated successfully. May 16 00:44:15.197207 kubelet[2112]: I0516 00:44:15.197149 2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d2kkd" podStartSLOduration=27.22740075 podStartE2EDuration="33.197118422s" podCreationTimestamp="2025-05-16 00:43:42 +0000 UTC" firstStartedPulling="2025-05-16 00:44:05.440008869 +0000 UTC m=+45.748631064" lastFinishedPulling="2025-05-16 00:44:11.409726581 +0000 UTC m=+51.718348736" observedRunningTime="2025-05-16 00:44:12.011514209 +0000 UTC m=+52.320136364" watchObservedRunningTime="2025-05-16 00:44:15.197118422 +0000 UTC m=+55.505740577" May 16 00:44:15.208180 systemd[1]: run-containerd-runc-k8s.io-3a75f63234029a7c04e98c70562aa87698ccb32a0581bfbe3c737af9f4ade824-runc.1oHWsm.mount: Deactivated successfully. May 16 00:44:15.265789 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:38682.service. May 16 00:44:15.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.81:22-10.0.0.1:38682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:15.268906 kernel: kauditd_printk_skb: 44 callbacks suppressed May 16 00:44:15.269052 kernel: audit: type=1130 audit(1747356255.264:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.81:22-10.0.0.1:38682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:15.308000 audit[5019]: USER_ACCT pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.309594 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 38682 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:15.311594 sshd[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:15.310000 audit[5019]: CRED_ACQ pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.314179 kernel: audit: type=1101 audit(1747356255.308:467): pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.314247 kernel: audit: type=1103 audit(1747356255.310:468): pid=5019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.315728 kernel: audit: type=1006 audit(1747356255.310:469): pid=5019 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=13 res=1 May 16 00:44:15.315787 kernel: audit: type=1300 audit(1747356255.310:469): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd73cf060 a2=3 a3=1 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:15.310000 audit[5019]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd73cf060 a2=3 a3=1 items=0 ppid=1 pid=5019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:15.315714 systemd[1]: Started session-13.scope. May 16 00:44:15.315949 systemd-logind[1310]: New session 13 of user core. May 16 00:44:15.310000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:15.319025 kernel: audit: type=1327 audit(1747356255.310:469): proctitle=737368643A20636F7265205B707269765D May 16 00:44:15.318000 audit[5019]: USER_START pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.322525 kernel: audit: type=1105 audit(1747356255.318:470): pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.320000 audit[5022]: CRED_ACQ pid=5022 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.324804 kernel: 
audit: type=1103 audit(1747356255.320:471): pid=5022 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.483787 sshd[5019]: pam_unix(sshd:session): session closed for user core May 16 00:44:15.483000 audit[5019]: USER_END pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.486993 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:38682.service: Deactivated successfully. May 16 00:44:15.483000 audit[5019]: CRED_DISP pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.487804 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:44:15.489596 systemd-logind[1310]: Session 13 logged out. Waiting for processes to exit. 
May 16 00:44:15.489761 kernel: audit: type=1106 audit(1747356255.483:472): pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.489796 kernel: audit: type=1104 audit(1747356255.483:473): pid=5019 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:15.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.81:22-10.0.0.1:38682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:15.490472 systemd-logind[1310]: Removed session 13. May 16 00:44:19.764331 env[1326]: time="2025-05-16T00:44:19.764285730Z" level=info msg="StopPodSandbox for \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\"" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.803 [WARNING][5051] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe218a81-50db-479d-bb87-757c8c52f897", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba", Pod:"coredns-7c65d6cfc9-nlbz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac4a1c52394", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.803 [INFO][5051] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.803 [INFO][5051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" iface="eth0" netns="" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.803 [INFO][5051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.803 [INFO][5051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.826 [INFO][5062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.826 [INFO][5062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.826 [INFO][5062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.840 [WARNING][5062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.840 [INFO][5062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.841 [INFO][5062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:19.849184 env[1326]: 2025-05-16 00:44:19.847 [INFO][5051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.849781 env[1326]: time="2025-05-16T00:44:19.849216597Z" level=info msg="TearDown network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" successfully" May 16 00:44:19.849781 env[1326]: time="2025-05-16T00:44:19.849248837Z" level=info msg="StopPodSandbox for \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" returns successfully" May 16 00:44:19.849781 env[1326]: time="2025-05-16T00:44:19.849723834Z" level=info msg="RemovePodSandbox for \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\"" May 16 00:44:19.849781 env[1326]: time="2025-05-16T00:44:19.849755273Z" level=info msg="Forcibly stopping sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\"" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.883 [WARNING][5079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe218a81-50db-479d-bb87-757c8c52f897", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3fd4a46f294790933a58d02a01e6aecac7d8512e124d2f68e08cf7994ba4cba", Pod:"coredns-7c65d6cfc9-nlbz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac4a1c52394", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.883 [INFO][5079] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.883 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" iface="eth0" netns="" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.883 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.883 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.903 [INFO][5087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.903 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.903 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.912 [WARNING][5087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.912 [INFO][5087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" HandleID="k8s-pod-network.527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" Workload="localhost-k8s-coredns--7c65d6cfc9--nlbz8-eth0" May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.913 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:19.917280 env[1326]: 2025-05-16 00:44:19.915 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5" May 16 00:44:19.917715 env[1326]: time="2025-05-16T00:44:19.917315346Z" level=info msg="TearDown network for sandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" successfully" May 16 00:44:19.924152 env[1326]: time="2025-05-16T00:44:19.924106097Z" level=info msg="RemovePodSandbox \"527ba30586ca511b23705e09229493b02063cc89e1691ca9672939754ac693a5\" returns successfully" May 16 00:44:19.924637 env[1326]: time="2025-05-16T00:44:19.924604973Z" level=info msg="StopPodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\"" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.955 [WARNING][5105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65a002c7-112a-4b0f-8977-55ccbd8ecc6b", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168", Pod:"coredns-7c65d6cfc9-f65v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic41d8f6b2da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.955 [INFO][5105] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.955 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" iface="eth0" netns="" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.955 [INFO][5105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.955 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.972 [INFO][5114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.972 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.972 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.982 [WARNING][5114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.982 [INFO][5114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.983 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:19.987319 env[1326]: 2025-05-16 00:44:19.985 [INFO][5105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:19.987746 env[1326]: time="2025-05-16T00:44:19.987350120Z" level=info msg="TearDown network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" successfully" May 16 00:44:19.987746 env[1326]: time="2025-05-16T00:44:19.987382600Z" level=info msg="StopPodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" returns successfully" May 16 00:44:19.988186 env[1326]: time="2025-05-16T00:44:19.988161354Z" level=info msg="RemovePodSandbox for \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\"" May 16 00:44:19.988241 env[1326]: time="2025-05-16T00:44:19.988198434Z" level=info msg="Forcibly stopping sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\"" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.028 [WARNING][5132] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65a002c7-112a-4b0f-8977-55ccbd8ecc6b", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eab5d3a8aeebfc184fc4be027b08e09ad57ad2ff0cb5311dc124e3dd5218b168", Pod:"coredns-7c65d6cfc9-f65v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic41d8f6b2da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.029 [INFO][5132] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.029 [INFO][5132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" iface="eth0" netns="" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.029 [INFO][5132] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.029 [INFO][5132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.047 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.047 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.047 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.056 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.056 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" HandleID="k8s-pod-network.e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" Workload="localhost-k8s-coredns--7c65d6cfc9--f65v2-eth0" May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.058 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.063030 env[1326]: 2025-05-16 00:44:20.060 [INFO][5132] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad" May 16 00:44:20.063030 env[1326]: time="2025-05-16T00:44:20.061709309Z" level=info msg="TearDown network for sandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" successfully" May 16 00:44:20.064778 env[1326]: time="2025-05-16T00:44:20.064733567Z" level=info msg="RemovePodSandbox \"e544556a8947e1642a9b03685a291ff2fb8fd49a5105d65a6679d1fbd6fe8bad\" returns successfully" May 16 00:44:20.065300 env[1326]: time="2025-05-16T00:44:20.065266523Z" level=info msg="StopPodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\"" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.095 [WARNING][5158] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d68fd98-e1d9-442d-9586-2d60cebfa71e", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa", Pod:"calico-apiserver-7d5c695cc5-mf7nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib48aa81fc66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.096 [INFO][5158] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.096 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" iface="eth0" netns="" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.096 [INFO][5158] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.096 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.119 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.119 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.119 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.128 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.128 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.129 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.133311 env[1326]: 2025-05-16 00:44:20.131 [INFO][5158] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.133773 env[1326]: time="2025-05-16T00:44:20.133340678Z" level=info msg="TearDown network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" successfully" May 16 00:44:20.133773 env[1326]: time="2025-05-16T00:44:20.133371157Z" level=info msg="StopPodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" returns successfully" May 16 00:44:20.133990 env[1326]: time="2025-05-16T00:44:20.133952073Z" level=info msg="RemovePodSandbox for \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\"" May 16 00:44:20.134055 env[1326]: time="2025-05-16T00:44:20.134001033Z" level=info msg="Forcibly stopping sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\"" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.166 [WARNING][5186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d68fd98-e1d9-442d-9586-2d60cebfa71e", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a36c62837f84995a13a2affd2d8bbf644b199238496f6f9d319dd937d471aefa", Pod:"calico-apiserver-7d5c695cc5-mf7nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib48aa81fc66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.167 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.167 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" iface="eth0" netns="" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.167 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.167 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.186 [INFO][5195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.186 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.186 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.195 [WARNING][5195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.195 [INFO][5195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" HandleID="k8s-pod-network.34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--mf7nf-eth0" May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.196 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.200321 env[1326]: 2025-05-16 00:44:20.198 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5" May 16 00:44:20.200840 env[1326]: time="2025-05-16T00:44:20.200804876Z" level=info msg="TearDown network for sandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" successfully" May 16 00:44:20.203698 env[1326]: time="2025-05-16T00:44:20.203664576Z" level=info msg="RemovePodSandbox \"34c113adae72391f05ec410f7667bc72e50f521b486504e99abc9355d0507bd5\" returns successfully" May 16 00:44:20.204335 env[1326]: time="2025-05-16T00:44:20.204308731Z" level=info msg="StopPodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\"" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.236 [WARNING][5213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0", GenerateName:"calico-kube-controllers-5648d477c5-", Namespace:"calico-system", SelfLink:"", UID:"b53c1672-73dc-401f-b6e2-787097ef7c61", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5648d477c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a", Pod:"calico-kube-controllers-5648d477c5-hhnfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81a92ec0250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.236 [INFO][5213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.236 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" iface="eth0" netns="" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.236 [INFO][5213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.236 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.253 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.253 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.253 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.262 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.262 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.263 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.267177 env[1326]: 2025-05-16 00:44:20.265 [INFO][5213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.267668 env[1326]: time="2025-05-16T00:44:20.267209083Z" level=info msg="TearDown network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" successfully" May 16 00:44:20.267668 env[1326]: time="2025-05-16T00:44:20.267239882Z" level=info msg="StopPodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" returns successfully" May 16 00:44:20.267916 env[1326]: time="2025-05-16T00:44:20.267884478Z" level=info msg="RemovePodSandbox for \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\"" May 16 00:44:20.268054 env[1326]: time="2025-05-16T00:44:20.268015237Z" level=info msg="Forcibly stopping sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\"" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.300 [WARNING][5238] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0", GenerateName:"calico-kube-controllers-5648d477c5-", Namespace:"calico-system", SelfLink:"", UID:"b53c1672-73dc-401f-b6e2-787097ef7c61", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5648d477c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"679c0faf86656f41f50110cab98c871be91f2406ffa41221cd54ba007046ac7a", Pod:"calico-kube-controllers-5648d477c5-hhnfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81a92ec0250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.300 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.300 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" iface="eth0" netns="" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.300 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.300 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.318 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.318 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.319 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.327 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.327 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" HandleID="k8s-pod-network.0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" Workload="localhost-k8s-calico--kube--controllers--5648d477c5--hhnfn-eth0" May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.328 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.332336 env[1326]: 2025-05-16 00:44:20.330 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae" May 16 00:44:20.332336 env[1326]: time="2025-05-16T00:44:20.332309898Z" level=info msg="TearDown network for sandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" successfully" May 16 00:44:20.336305 env[1326]: time="2025-05-16T00:44:20.336269470Z" level=info msg="RemovePodSandbox \"0115ffb95663857d681e5eeacd9b3eef124cc8210edc2c5d5e89416bc1070fae\" returns successfully" May 16 00:44:20.336737 env[1326]: time="2025-05-16T00:44:20.336707907Z" level=info msg="StopPodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\"" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.370 [WARNING][5264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5", Pod:"goldmane-8f77d7b6c-kxr6q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia84cab58e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.370 [INFO][5264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.370 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" iface="eth0" netns="" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.370 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.370 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.392 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.392 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.392 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.401 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.401 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.403 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:44:20.406889 env[1326]: 2025-05-16 00:44:20.405 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.407353 env[1326]: time="2025-05-16T00:44:20.406915486Z" level=info msg="TearDown network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" successfully" May 16 00:44:20.407353 env[1326]: time="2025-05-16T00:44:20.406946846Z" level=info msg="StopPodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" returns successfully" May 16 00:44:20.407405 env[1326]: time="2025-05-16T00:44:20.407382243Z" level=info msg="RemovePodSandbox for \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\"" May 16 00:44:20.407454 env[1326]: time="2025-05-16T00:44:20.407414882Z" level=info msg="Forcibly stopping sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\"" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.440 [WARNING][5290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6839961df887ee540a2d2eb8c79af7789580089db9b79c2fd0c4e17de5c54be5", Pod:"goldmane-8f77d7b6c-kxr6q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia84cab58e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.440 [INFO][5290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.440 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" iface="eth0" netns="" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.440 [INFO][5290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.440 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.460 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.460 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.460 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.468 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.468 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" HandleID="k8s-pod-network.14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" Workload="localhost-k8s-goldmane--8f77d7b6c--kxr6q-eth0" May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.469 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:44:20.473238 env[1326]: 2025-05-16 00:44:20.471 [INFO][5290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618" May 16 00:44:20.473668 env[1326]: time="2025-05-16T00:44:20.473266892Z" level=info msg="TearDown network for sandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" successfully" May 16 00:44:20.476112 env[1326]: time="2025-05-16T00:44:20.476075872Z" level=info msg="RemovePodSandbox \"14becad6380482844d3e3a91da2c0b1548214bb72d37961c705f71c50c17b618\" returns successfully" May 16 00:44:20.476636 env[1326]: time="2025-05-16T00:44:20.476608909Z" level=info msg="StopPodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\"" May 16 00:44:20.486993 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:38684.service. May 16 00:44:20.490945 kernel: kauditd_printk_skb: 1 callbacks suppressed May 16 00:44:20.491067 kernel: audit: type=1130 audit(1747356260.487:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.81:22-10.0.0.1:38684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:20.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.81:22-10.0.0.1:38684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:20.538000 audit[5321]: USER_ACCT pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.542167 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 38684 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:20.540954 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:20.539000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.545513 kernel: audit: type=1101 audit(1747356260.538:476): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.545589 kernel: audit: type=1103 audit(1747356260.539:477): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.545625 kernel: audit: type=1006 audit(1747356260.539:478): pid=5321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 16 00:44:20.546812 kernel: audit: type=1300 audit(1747356260.539:478): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc1c57f0 a2=3 a3=1 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) May 16 00:44:20.539000 audit[5321]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc1c57f0 a2=3 a3=1 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:20.539000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:20.549992 kernel: audit: type=1327 audit(1747356260.539:478): proctitle=737368643A20636F7265205B707269765D May 16 00:44:20.555007 systemd-logind[1310]: New session 14 of user core. May 16 00:44:20.555573 systemd[1]: Started session-14.scope. May 16 00:44:20.558000 audit[5321]: USER_START pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.559000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.564443 kernel: audit: type=1105 audit(1747356260.558:479): pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.564527 kernel: audit: type=1103 audit(1747356260.559:480): pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.518 [WARNING][5316] cni-plugin/k8s.go 604: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d2kkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00bfb5c0-df56-4053-a2df-e7346d66a58a", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace", Pod:"csi-node-driver-d2kkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib99d2e79800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.518 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.518 [INFO][5316] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" iface="eth0" netns="" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.518 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.518 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.536 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.536 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.536 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.552 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.552 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.554 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.566120 env[1326]: 2025-05-16 00:44:20.564 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.566496 env[1326]: time="2025-05-16T00:44:20.566150990Z" level=info msg="TearDown network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" successfully" May 16 00:44:20.566496 env[1326]: time="2025-05-16T00:44:20.566185590Z" level=info msg="StopPodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" returns successfully" May 16 00:44:20.566719 env[1326]: time="2025-05-16T00:44:20.566687466Z" level=info msg="RemovePodSandbox for \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\"" May 16 00:44:20.566842 env[1326]: time="2025-05-16T00:44:20.566805865Z" level=info msg="Forcibly stopping sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\"" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.597 [WARNING][5347] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d2kkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00bfb5c0-df56-4053-a2df-e7346d66a58a", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"894366fb6e15cec24c2c6323a0d9a94880bb30d5321a0f2bd8ca46b16e931ace", Pod:"csi-node-driver-d2kkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib99d2e79800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.597 [INFO][5347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.597 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" iface="eth0" netns="" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.597 [INFO][5347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.597 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.624 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.624 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.624 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.632 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.632 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" HandleID="k8s-pod-network.6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" Workload="localhost-k8s-csi--node--driver--d2kkd-eth0" May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.633 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:44:20.640468 env[1326]: 2025-05-16 00:44:20.637 [INFO][5347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34" May 16 00:44:20.641131 env[1326]: time="2025-05-16T00:44:20.641082295Z" level=info msg="TearDown network for sandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" successfully" May 16 00:44:20.647796 env[1326]: time="2025-05-16T00:44:20.647758248Z" level=info msg="RemovePodSandbox \"6af41866df6c347d7925f8cb6ab7d29d1aa222b3d5e8214a8517d495f25cae34\" returns successfully" May 16 00:44:20.648431 env[1326]: time="2025-05-16T00:44:20.648408403Z" level=info msg="StopPodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\"" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.683 [WARNING][5382] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" WorkloadEndpoint="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.683 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.683 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" iface="eth0" netns="" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.683 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.683 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.706 [INFO][5391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.706 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.706 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.714 [WARNING][5391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.714 [INFO][5391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.716 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:44:20.720212 env[1326]: 2025-05-16 00:44:20.718 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.720689 env[1326]: time="2025-05-16T00:44:20.720653608Z" level=info msg="TearDown network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" successfully" May 16 00:44:20.720754 env[1326]: time="2025-05-16T00:44:20.720739047Z" level=info msg="StopPodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" returns successfully" May 16 00:44:20.721701 env[1326]: time="2025-05-16T00:44:20.721671360Z" level=info msg="RemovePodSandbox for \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\"" May 16 00:44:20.721788 env[1326]: time="2025-05-16T00:44:20.721709320Z" level=info msg="Forcibly stopping sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\"" May 16 00:44:20.752257 sshd[5321]: pam_unix(sshd:session): session closed for user core May 16 00:44:20.754680 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:38694.service. May 16 00:44:20.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.81:22-10.0.0.1:38694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:20.758028 kernel: audit: type=1130 audit(1747356260.753:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.81:22-10.0.0.1:38694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:20.758000 audit[5321]: USER_END pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.761737 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:38684.service: Deactivated successfully. May 16 00:44:20.758000 audit[5321]: CRED_DISP pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.81:22-10.0.0.1:38684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:20.762603 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:44:20.762983 kernel: audit: type=1106 audit(1747356260.758:482): pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.763234 systemd-logind[1310]: Session 14 logged out. Waiting for processes to exit. May 16 00:44:20.764142 systemd-logind[1310]: Removed session 14. 
May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.764 [WARNING][5409] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" WorkloadEndpoint="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.764 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.764 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" iface="eth0" netns="" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.764 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.764 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.784 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.784 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.784 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.793 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.793 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" HandleID="k8s-pod-network.41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" Workload="localhost-k8s-whisker--56577bbdf5--bx2tp-eth0" May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.794 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.798807 env[1326]: 2025-05-16 00:44:20.797 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5" May 16 00:44:20.799429 env[1326]: time="2025-05-16T00:44:20.798840810Z" level=info msg="TearDown network for sandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" successfully" May 16 00:44:20.802736 env[1326]: time="2025-05-16T00:44:20.802678062Z" level=info msg="RemovePodSandbox \"41500cfa167c2d3acd4c20a2e2d954fa2676aa41c1753a0cd78ac5bf7e2308d5\" returns successfully" May 16 00:44:20.803235 env[1326]: time="2025-05-16T00:44:20.803205059Z" level=info msg="StopPodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\"" May 16 00:44:20.802000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.803412 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 38694 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:20.803000 audit[5416]: CRED_ACQ pid=5416 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.803000 audit[5416]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2610640 a2=3 a3=1 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:20.803000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:20.805261 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:20.809684 systemd[1]: Started session-15.scope. May 16 00:44:20.810111 systemd-logind[1310]: New session 15 of user core. May 16 00:44:20.813000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.814000 audit[5440]: CRED_ACQ pid=5440 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.845 [WARNING][5439] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"78479183-5b0e-4e14-9b65-379d830097f9", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d", Pod:"calico-apiserver-7d5c695cc5-gsjpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f300b9a84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.845 [INFO][5439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.845 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" iface="eth0" netns="" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.845 [INFO][5439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.845 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.862 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.862 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.862 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.872 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.872 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.874 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.877780 env[1326]: 2025-05-16 00:44:20.876 [INFO][5439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.878278 env[1326]: time="2025-05-16T00:44:20.877826126Z" level=info msg="TearDown network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" successfully" May 16 00:44:20.878278 env[1326]: time="2025-05-16T00:44:20.877858326Z" level=info msg="StopPodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" returns successfully" May 16 00:44:20.878532 env[1326]: time="2025-05-16T00:44:20.878499641Z" level=info msg="RemovePodSandbox for \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\"" May 16 00:44:20.878662 env[1326]: time="2025-05-16T00:44:20.878622481Z" level=info msg="Forcibly stopping sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\"" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.914 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0", GenerateName:"calico-apiserver-7d5c695cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"78479183-5b0e-4e14-9b65-379d830097f9", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5c695cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8520b302d82be0faa333306762ceb95ec5424d9e52f4635097d0869bccf1e5d", Pod:"calico-apiserver-7d5c695cc5-gsjpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f300b9a84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.915 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.915 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" iface="eth0" netns="" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.915 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.915 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.936 [INFO][5481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.936 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.936 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.946 [WARNING][5481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.946 [INFO][5481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" HandleID="k8s-pod-network.0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" Workload="localhost-k8s-calico--apiserver--7d5c695cc5--gsjpm-eth0" May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.948 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:44:20.956737 env[1326]: 2025-05-16 00:44:20.954 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124" May 16 00:44:20.957438 env[1326]: time="2025-05-16T00:44:20.957396719Z" level=info msg="TearDown network for sandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" successfully" May 16 00:44:20.964118 env[1326]: time="2025-05-16T00:44:20.964065151Z" level=info msg="RemovePodSandbox \"0a8e0ee2221b945799d4da8766a6f56700e15af14b28a034d3f815e25ac84124\" returns successfully" May 16 00:44:21.086405 sshd[5416]: pam_unix(sshd:session): session closed for user core May 16 00:44:21.088247 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:38702.service. May 16 00:44:21.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.81:22-10.0.0.1:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:21.088000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:21.088000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:21.091940 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:38694.service: Deactivated successfully. May 16 00:44:21.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.81:22-10.0.0.1:38694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:21.092957 systemd-logind[1310]: Session 15 logged out. Waiting for processes to exit. May 16 00:44:21.093055 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:44:21.093813 systemd-logind[1310]: Removed session 15. 
May 16 00:44:21.132000 audit[5489]: USER_ACCT pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:21.133409 sshd[5489]: Accepted publickey for core from 10.0.0.1 port 38702 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:21.133000 audit[5489]: CRED_ACQ pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:21.133000 audit[5489]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9d63260 a2=3 a3=1 items=0 ppid=1 pid=5489 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:21.133000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:21.134816 sshd[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:21.138851 systemd-logind[1310]: New session 16 of user core. May 16 00:44:21.140156 systemd[1]: Started session-16.scope. 
May 16 00:44:21.143000 audit[5489]: USER_START pid=5489 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:21.145000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:22.870477 sshd[5489]: pam_unix(sshd:session): session closed for user core May 16 00:44:22.871000 audit[5489]: USER_END pid=5489 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:22.871000 audit[5489]: CRED_DISP pid=5489 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:22.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.81:22-10.0.0.1:41020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:22.873124 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:41020.service. 
May 16 00:44:22.873000 audit[5507]: NETFILTER_CFG table=filter:128 family=2 entries=12 op=nft_register_rule pid=5507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:22.873000 audit[5507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff4212bc0 a2=0 a3=1 items=0 ppid=2257 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:22.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:22.874908 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:38702.service: Deactivated successfully. May 16 00:44:22.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.81:22-10.0.0.1:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:22.876338 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:44:22.876853 systemd-logind[1310]: Session 16 logged out. Waiting for processes to exit. May 16 00:44:22.881917 systemd-logind[1310]: Removed session 16. 
May 16 00:44:22.883000 audit[5507]: NETFILTER_CFG table=nat:129 family=2 entries=22 op=nft_register_rule pid=5507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:22.883000 audit[5507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff4212bc0 a2=0 a3=1 items=0 ppid=2257 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:22.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:22.910000 audit[5513]: NETFILTER_CFG table=filter:130 family=2 entries=24 op=nft_register_rule pid=5513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:22.910000 audit[5513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=ffffc6323c70 a2=0 a3=1 items=0 ppid=2257 pid=5513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:22.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:22.919000 audit[5513]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:22.919000 audit[5513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffc6323c70 a2=0 a3=1 items=0 ppid=2257 pid=5513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:22.919000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:22.927000 audit[5508]: USER_ACCT pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:22.929123 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 41020 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:22.928000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:22.928000 audit[5508]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff8b17850 a2=3 a3=1 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:22.928000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:22.930274 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:22.933812 systemd-logind[1310]: New session 17 of user core. May 16 00:44:22.934633 systemd[1]: Started session-17.scope. 
May 16 00:44:22.937000 audit[5508]: USER_START pid=5508 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:22.939000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.257943 sshd[5508]: pam_unix(sshd:session): session closed for user core May 16 00:44:23.260703 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:41024.service. May 16 00:44:23.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.81:22-10.0.0.1:41024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:23.265000 audit[5508]: USER_END pid=5508 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.265000 audit[5508]: CRED_DISP pid=5508 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.268856 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:41020.service: Deactivated successfully. May 16 00:44:23.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.81:22-10.0.0.1:41020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:23.269797 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:44:23.271749 systemd-logind[1310]: Session 17 logged out. Waiting for processes to exit. May 16 00:44:23.273186 systemd-logind[1310]: Removed session 17. May 16 00:44:23.308077 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 41024 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:23.307000 audit[5522]: USER_ACCT pid=5522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.310024 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:23.308000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.308000 audit[5522]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0e21040 a2=3 a3=1 items=0 ppid=1 pid=5522 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:23.308000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:23.314470 systemd-logind[1310]: New session 18 of user core. May 16 00:44:23.315323 systemd[1]: Started session-18.scope. 
May 16 00:44:23.318000 audit[5522]: USER_START pid=5522 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.320000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.431524 sshd[5522]: pam_unix(sshd:session): session closed for user core May 16 00:44:23.431000 audit[5522]: USER_END pid=5522 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.431000 audit[5522]: CRED_DISP pid=5522 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:23.434202 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:41024.service: Deactivated successfully. May 16 00:44:23.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.81:22-10.0.0.1:41024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:23.435220 systemd-logind[1310]: Session 18 logged out. Waiting for processes to exit. May 16 00:44:23.435280 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:44:23.436227 systemd-logind[1310]: Removed session 18. 
May 16 00:44:23.769597 kubelet[2112]: E0516 00:44:23.769542 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-788588bcd7-c7kkr" podUID="083ed115-d2c3-4e81-b1aa-73fbcace47ab" May 16 00:44:27.064724 systemd[1]: run-containerd-runc-k8s.io-d5bb17f73075f8c9512bf12618a156e07fd87d464525736df8a96a4da4a1d56e-runc.gXWxoq.mount: Deactivated successfully. May 16 00:44:27.381506 kubelet[2112]: I0516 00:44:27.381466 2112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:44:27.418000 audit[5567]: NETFILTER_CFG table=filter:132 family=2 entries=36 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:27.420152 kernel: kauditd_printk_skb: 57 callbacks suppressed May 16 00:44:27.420224 kernel: audit: type=1325 audit(1747356267.418:524): table=filter:132 family=2 entries=36 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:27.418000 audit[5567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=fffff75ef260 a2=0 a3=1 items=0 ppid=2257 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:27.424905 kernel: audit: type=1300 audit(1747356267.418:524): arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=fffff75ef260 a2=0 a3=1 items=0 ppid=2257 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:27.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:27.426989 kernel: audit: type=1327 audit(1747356267.418:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:27.427000 audit[5567]: NETFILTER_CFG table=nat:133 family=2 entries=34 op=nft_register_chain pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:27.427000 audit[5567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=fffff75ef260 a2=0 a3=1 items=0 ppid=2257 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:27.433426 kernel: audit: type=1325 audit(1747356267.427:525): table=nat:133 family=2 entries=34 op=nft_register_chain pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:27.433490 kernel: audit: type=1300 audit(1747356267.427:525): arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=fffff75ef260 a2=0 a3=1 items=0 ppid=2257 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:27.433512 kernel: audit: type=1327 audit(1747356267.427:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:27.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:27.768670 kubelet[2112]: E0516 00:44:27.768562 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82" May 16 00:44:28.424000 audit[5569]: NETFILTER_CFG table=filter:134 family=2 entries=24 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:28.424000 audit[5569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe5bdd010 a2=0 a3=1 items=0 ppid=2257 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:28.429863 kernel: audit: type=1325 audit(1747356268.424:526): table=filter:134 family=2 entries=24 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:28.429933 kernel: audit: type=1300 audit(1747356268.424:526): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe5bdd010 a2=0 a3=1 items=0 ppid=2257 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:28.429972 kernel: audit: type=1327 audit(1747356268.424:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:28.424000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:28.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.81:22-10.0.0.1:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:28.434802 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:41036.service. May 16 00:44:28.434000 audit[5569]: NETFILTER_CFG table=nat:135 family=2 entries=106 op=nft_register_chain pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 16 00:44:28.434000 audit[5569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffe5bdd010 a2=0 a3=1 items=0 ppid=2257 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:28.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 16 00:44:28.438003 kernel: audit: type=1130 audit(1747356268.433:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.81:22-10.0.0.1:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:28.476000 audit[5570]: USER_ACCT pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:28.477701 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 41036 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:28.477000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:28.477000 audit[5570]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd92df210 a2=3 a3=1 items=0 ppid=1 pid=5570 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:28.477000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 16 00:44:28.479013 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:28.483010 systemd-logind[1310]: New session 19 of user core. May 16 00:44:28.483465 systemd[1]: Started session-19.scope. 
May 16 00:44:28.487000 audit[5570]: USER_START pid=5570 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:28.489000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:28.600078 sshd[5570]: pam_unix(sshd:session): session closed for user core May 16 00:44:28.599000 audit[5570]: USER_END pid=5570 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:28.600000 audit[5570]: CRED_DISP pid=5570 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 16 00:44:28.602799 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:41036.service: Deactivated successfully. May 16 00:44:28.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.81:22-10.0.0.1:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:28.603767 systemd-logind[1310]: Session 19 logged out. Waiting for processes to exit. May 16 00:44:28.603838 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:44:28.604753 systemd-logind[1310]: Removed session 19. May 16 00:44:33.603386 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:45216.service. 
May 16 00:44:33.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.81:22-10.0.0.1:45216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:33.604417 kernel: kauditd_printk_skb: 13 callbacks suppressed
May 16 00:44:33.604465 kernel: audit: type=1130 audit(1747356273.602:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.81:22-10.0.0.1:45216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:33.645000 audit[5587]: USER_ACCT pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.646891 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 45216 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:44:33.648316 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:44:33.646000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.651357 kernel: audit: type=1101 audit(1747356273.645:538): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.651440 kernel: audit: type=1103 audit(1747356273.646:539): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.651463 kernel: audit: type=1006 audit(1747356273.647:540): pid=5587 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
May 16 00:44:33.653074 kernel: audit: type=1300 audit(1747356273.647:540): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee19f330 a2=3 a3=1 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:44:33.647000 audit[5587]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee19f330 a2=3 a3=1 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:44:33.654945 systemd[1]: Started session-20.scope.
May 16 00:44:33.656035 kernel: audit: type=1327 audit(1747356273.647:540): proctitle=737368643A20636F7265205B707269765D
May 16 00:44:33.647000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 16 00:44:33.656194 systemd-logind[1310]: New session 20 of user core.
May 16 00:44:33.660000 audit[5587]: USER_START pid=5587 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.661000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.666087 kernel: audit: type=1105 audit(1747356273.660:541): pid=5587 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.666127 kernel: audit: type=1103 audit(1747356273.661:542): pid=5590 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.802955 sshd[5587]: pam_unix(sshd:session): session closed for user core
May 16 00:44:33.804000 audit[5587]: USER_END pid=5587 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.807293 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:45216.service: Deactivated successfully.
May 16 00:44:33.808160 systemd[1]: session-20.scope: Deactivated successfully.
May 16 00:44:33.804000 audit[5587]: CRED_DISP pid=5587 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.810577 kernel: audit: type=1106 audit(1747356273.804:543): pid=5587 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.810716 kernel: audit: type=1104 audit(1747356273.804:544): pid=5587 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:33.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.81:22-10.0.0.1:45216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:33.811093 systemd-logind[1310]: Session 20 logged out. Waiting for processes to exit.
May 16 00:44:33.811888 systemd-logind[1310]: Removed session 20.
May 16 00:44:34.767450 kubelet[2112]: E0516 00:44:34.767409 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:44:36.767926 env[1326]: time="2025-05-16T00:44:36.767878313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 16 00:44:36.916056 env[1326]: time="2025-05-16T00:44:36.915926839Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 16 00:44:36.916833 env[1326]: time="2025-05-16T00:44:36.916801404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 16 00:44:36.917090 kubelet[2112]: E0516 00:44:36.917031 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 00:44:36.917392 kubelet[2112]: E0516 00:44:36.917092 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 00:44:36.917392 kubelet[2112]: E0516 00:44:36.917222 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c207aa3ffe84b5fbd162d746ecae6bd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-788588bcd7-c7kkr_calico-system(083ed115-d2c3-4e81-b1aa-73fbcace47ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 16 00:44:36.919292 env[1326]: time="2025-05-16T00:44:36.919245180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 16 00:44:37.098680 env[1326]: time="2025-05-16T00:44:37.098533718Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 16 00:44:37.099591 env[1326]: time="2025-05-16T00:44:37.099554084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 16 00:44:37.099809 kubelet[2112]: E0516 00:44:37.099768 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 00:44:37.099866 kubelet[2112]: E0516 00:44:37.099821 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 00:44:37.099998 kubelet[2112]: E0516 00:44:37.099943 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-788588bcd7-c7kkr_calico-system(083ed115-d2c3-4e81-b1aa-73fbcace47ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 16 00:44:37.101163 kubelet[2112]: E0516 00:44:37.101116 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-788588bcd7-c7kkr" podUID="083ed115-d2c3-4e81-b1aa-73fbcace47ab"
May 16 00:44:38.806369 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:45220.service.
May 16 00:44:38.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.81:22-10.0.0.1:45220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:38.807199 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 16 00:44:38.807257 kernel: audit: type=1130 audit(1747356278.805:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.81:22-10.0.0.1:45220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:38.849000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.851075 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 45220 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:44:38.852639 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:44:38.851000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.855487 kernel: audit: type=1101 audit(1747356278.849:547): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.855583 kernel: audit: type=1103 audit(1747356278.851:548): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.855611 kernel: audit: type=1006 audit(1747356278.851:549): pid=5607 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
May 16 00:44:38.851000 audit[5607]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa260b80 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:44:38.857316 systemd-logind[1310]: New session 21 of user core.
May 16 00:44:38.857981 systemd[1]: Started session-21.scope.
May 16 00:44:38.859352 kernel: audit: type=1300 audit(1747356278.851:549): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa260b80 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:44:38.851000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 16 00:44:38.860225 kernel: audit: type=1327 audit(1747356278.851:549): proctitle=737368643A20636F7265205B707269765D
May 16 00:44:38.861000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.863000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.867605 kernel: audit: type=1105 audit(1747356278.861:550): pid=5607 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:38.867664 kernel: audit: type=1103 audit(1747356278.863:551): pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:39.041518 sshd[5607]: pam_unix(sshd:session): session closed for user core
May 16 00:44:39.041000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:39.044039 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:45220.service: Deactivated successfully.
May 16 00:44:39.045215 systemd[1]: session-21.scope: Deactivated successfully.
May 16 00:44:39.041000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:39.045727 systemd-logind[1310]: Session 21 logged out. Waiting for processes to exit.
May 16 00:44:39.046514 systemd-logind[1310]: Removed session 21.
May 16 00:44:39.047592 kernel: audit: type=1106 audit(1747356279.041:552): pid=5607 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:39.047673 kernel: audit: type=1104 audit(1747356279.041:553): pid=5607 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 16 00:44:39.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.81:22-10.0.0.1:45220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:39.770206 env[1326]: time="2025-05-16T00:44:39.770133208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 16 00:44:39.927771 env[1326]: time="2025-05-16T00:44:39.927693323Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 16 00:44:39.928711 env[1326]: time="2025-05-16T00:44:39.928666729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 16 00:44:39.928922 kubelet[2112]: E0516 00:44:39.928862 2112 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 16 00:44:39.929224 kubelet[2112]: E0516 00:44:39.928927 2112 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 16 00:44:39.929224 kubelet[2112]: E0516 00:44:39.929091 2112 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz7jz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-kxr6q_calico-system(74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 16 00:44:39.930292 kubelet[2112]: E0516 00:44:39.930263 2112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-kxr6q" podUID="74fcc6e1-5ef8-4b7f-811d-f8fe3a545f82"