Jul 14 21:55:59.711780 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:55:59.711799 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 14 20:49:56 -00 2025
Jul 14 21:55:59.711807 kernel: efi: EFI v2.70 by EDK II
Jul 14 21:55:59.711812 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 14 21:55:59.711817 kernel: random: crng init done
Jul 14 21:55:59.711823 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:55:59.711829 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 14 21:55:59.711835 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:55:59.711841 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711846 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711851 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711856 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711862 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711867 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711875 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711880 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711886 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:55:59.711892 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:55:59.711897 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:55:59.711903 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:55:59.711909 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 14 21:55:59.711914 kernel: Zone ranges:
Jul 14 21:55:59.711920 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:55:59.711926 kernel: DMA32 empty
Jul 14 21:55:59.711932 kernel: Normal empty
Jul 14 21:55:59.711937 kernel: Movable zone start for each node
Jul 14 21:55:59.711943 kernel: Early memory node ranges
Jul 14 21:55:59.711948 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 14 21:55:59.711954 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 14 21:55:59.711960 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 14 21:55:59.711965 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 14 21:55:59.711971 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 14 21:55:59.711976 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 14 21:55:59.711982 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 14 21:55:59.711988 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:55:59.711994 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:55:59.712000 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:55:59.712005 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:55:59.712011 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:55:59.712024 kernel: psci: Trusted OS migration not required
Jul 14 21:55:59.712034 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:55:59.712040 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:55:59.712048 kernel: ACPI: SRAT not present
Jul 14 21:55:59.712054 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 14 21:55:59.712064 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 14 21:55:59.712071 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:55:59.712077 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:55:59.712083 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:55:59.712090 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:55:59.712096 kernel: CPU features: detected: Spectre-v4
Jul 14 21:55:59.712102 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:55:59.712109 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:55:59.712115 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:55:59.712121 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:55:59.712127 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:55:59.712133 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:55:59.712139 kernel: Policy zone: DMA
Jul 14 21:55:59.712146 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236
Jul 14 21:55:59.712152 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:55:59.712158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:55:59.712164 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:55:59.712170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:55:59.712178 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 14 21:55:59.712184 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:55:59.712190 kernel: trace event string verifier disabled
Jul 14 21:55:59.712196 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:55:59.712202 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:55:59.712209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:55:59.712216 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:55:59.712222 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:55:59.712228 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:55:59.712235 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:55:59.712240 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:55:59.712247 kernel: GICv3: 256 SPIs implemented
Jul 14 21:55:59.712253 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:55:59.712259 kernel: GICv3: Distributor has no Range Selector support
Jul 14 21:55:59.712265 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:55:59.712271 kernel: GICv3: 16 PPIs implemented
Jul 14 21:55:59.712277 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:55:59.712283 kernel: ACPI: SRAT not present
Jul 14 21:55:59.712289 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:55:59.712295 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:55:59.712301 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:55:59.712307 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 14 21:55:59.712313 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 14 21:55:59.712320 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:55:59.712327 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:55:59.712333 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:55:59.712339 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:55:59.712345 kernel: arm-pv: using stolen time PV
Jul 14 21:55:59.712351 kernel: Console: colour dummy device 80x25
Jul 14 21:55:59.712357 kernel: ACPI: Core revision 20210730
Jul 14 21:55:59.712364 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:55:59.712370 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:55:59.712376 kernel: LSM: Security Framework initializing
Jul 14 21:55:59.712383 kernel: SELinux: Initializing.
Jul 14 21:55:59.712390 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:55:59.712396 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:55:59.712402 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:55:59.712408 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:55:59.712414 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:55:59.712420 kernel: Remapping and enabling EFI services.
Jul 14 21:55:59.712426 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:55:59.712432 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:55:59.712440 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:55:59.712446 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 14 21:55:59.712452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:55:59.712458 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:55:59.712465 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:55:59.712471 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:55:59.712477 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 14 21:55:59.712483 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:55:59.712489 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:55:59.712495 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:55:59.712503 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:55:59.712509 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 14 21:55:59.712515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:55:59.712521 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:55:59.712532 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:55:59.712539 kernel: SMP: Total of 4 processors activated.
Jul 14 21:55:59.712545 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:55:59.712552 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:55:59.712559 kernel: CPU features: detected: Common not Private translations
Jul 14 21:55:59.712565 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:55:59.712571 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:55:59.712578 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:55:59.712593 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:55:59.712599 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:55:59.712606 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:55:59.712612 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:55:59.712619 kernel: alternatives: patching kernel code
Jul 14 21:55:59.712627 kernel: devtmpfs: initialized
Jul 14 21:55:59.712634 kernel: KASLR enabled
Jul 14 21:55:59.712640 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:55:59.712647 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:55:59.712654 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:55:59.712660 kernel: SMBIOS 3.0.0 present.
Jul 14 21:55:59.712667 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 14 21:55:59.712674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:55:59.712680 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:55:59.712688 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:55:59.712695 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:55:59.712701 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:55:59.712708 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Jul 14 21:55:59.712714 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:55:59.712721 kernel: cpuidle: using governor menu
Jul 14 21:55:59.712727 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:55:59.712734 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:55:59.712740 kernel: ACPI: bus type PCI registered
Jul 14 21:55:59.712748 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:55:59.712754 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:55:59.712761 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:55:59.712768 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:55:59.712784 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:55:59.712791 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:55:59.712798 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 21:55:59.712805 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:55:59.712811 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:55:59.712820 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:55:59.712826 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:55:59.712833 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 14 21:55:59.712839 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 14 21:55:59.712846 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 14 21:55:59.712852 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:55:59.712859 kernel: ACPI: Interpreter enabled
Jul 14 21:55:59.712866 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:55:59.712872 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:55:59.712880 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:55:59.712887 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:55:59.712893 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:55:59.713007 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:55:59.713081 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:55:59.713138 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:55:59.713196 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:55:59.713255 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:55:59.713264 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:55:59.713271 kernel: PCI host bridge to bus 0000:00
Jul 14 21:55:59.713335 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:55:59.713388 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:55:59.713441 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:55:59.713491 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:55:59.713562 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:55:59.713648 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:55:59.713712 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:55:59.713773 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:55:59.713831 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:55:59.713889 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:55:59.713949 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:55:59.714009 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:55:59.714070 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:55:59.714123 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:55:59.714182 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:55:59.714198 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:55:59.714205 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:55:59.714212 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:55:59.714221 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:55:59.714227 kernel: iommu: Default domain type: Translated
Jul 14 21:55:59.714234 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:55:59.714241 kernel: vgaarb: loaded
Jul 14 21:55:59.714247 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 14 21:55:59.714254 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 14 21:55:59.714260 kernel: PTP clock support registered
Jul 14 21:55:59.714267 kernel: Registered efivars operations
Jul 14 21:55:59.714273 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:55:59.714284 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:55:59.714293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:55:59.714300 kernel: pnp: PnP ACPI init
Jul 14 21:55:59.714366 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:55:59.714375 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:55:59.714383 kernel: NET: Registered PF_INET protocol family
Jul 14 21:55:59.714390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:55:59.714397 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:55:59.714404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:55:59.714412 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:55:59.714419 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 14 21:55:59.714425 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:55:59.714432 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:55:59.714439 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:55:59.714445 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:55:59.714452 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:55:59.714458 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:55:59.714465 kernel: kvm [1]: HYP mode not available
Jul 14 21:55:59.714473 kernel: Initialise system trusted keyrings
Jul 14 21:55:59.714479 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:55:59.714486 kernel: Key type asymmetric registered
Jul 14 21:55:59.714493 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:55:59.714499 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 14 21:55:59.714505 kernel: io scheduler mq-deadline registered
Jul 14 21:55:59.714512 kernel: io scheduler kyber registered
Jul 14 21:55:59.714519 kernel: io scheduler bfq registered
Jul 14 21:55:59.714525 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:55:59.714533 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:55:59.714540 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:55:59.714619 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:55:59.714629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:55:59.714635 kernel: thunder_xcv, ver 1.0
Jul 14 21:55:59.714642 kernel: thunder_bgx, ver 1.0
Jul 14 21:55:59.714648 kernel: nicpf, ver 1.0
Jul 14 21:55:59.714655 kernel: nicvf, ver 1.0
Jul 14 21:55:59.714723 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:55:59.714780 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:55:59 UTC (1752530159)
Jul 14 21:55:59.714789 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:55:59.714795 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:55:59.714802 kernel: Segment Routing with IPv6
Jul 14 21:55:59.714809 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:55:59.714815 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:55:59.714822 kernel: Key type dns_resolver registered
Jul 14 21:55:59.714828 kernel: registered taskstats version 1
Jul 14 21:55:59.714836 kernel: Loading compiled-in X.509 certificates
Jul 14 21:55:59.714843 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 118351bb2b1409a8fe1c98db16ecff1bb5342a27'
Jul 14 21:55:59.714849 kernel: Key type .fscrypt registered
Jul 14 21:55:59.714855 kernel: Key type fscrypt-provisioning registered
Jul 14 21:55:59.714862 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:55:59.714868 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:55:59.714875 kernel: ima: No architecture policies found
Jul 14 21:55:59.714881 kernel: clk: Disabling unused clocks
Jul 14 21:55:59.714888 kernel: Freeing unused kernel memory: 36416K
Jul 14 21:55:59.714895 kernel: Run /init as init process
Jul 14 21:55:59.714902 kernel: with arguments:
Jul 14 21:55:59.714908 kernel: /init
Jul 14 21:55:59.714915 kernel: with environment:
Jul 14 21:55:59.714921 kernel: HOME=/
Jul 14 21:55:59.714927 kernel: TERM=linux
Jul 14 21:55:59.714934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:55:59.714942 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 14 21:55:59.714952 systemd[1]: Detected virtualization kvm.
Jul 14 21:55:59.714959 systemd[1]: Detected architecture arm64.
Jul 14 21:55:59.714966 systemd[1]: Running in initrd.
Jul 14 21:55:59.714973 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:55:59.714980 systemd[1]: Hostname set to .
Jul 14 21:55:59.714987 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:55:59.714994 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:55:59.715001 systemd[1]: Started systemd-ask-password-console.path.
Jul 14 21:55:59.715009 systemd[1]: Reached target cryptsetup.target.
Jul 14 21:55:59.715022 systemd[1]: Reached target paths.target.
Jul 14 21:55:59.715030 systemd[1]: Reached target slices.target.
Jul 14 21:55:59.715037 systemd[1]: Reached target swap.target.
Jul 14 21:55:59.715044 systemd[1]: Reached target timers.target.
Jul 14 21:55:59.715052 systemd[1]: Listening on iscsid.socket.
Jul 14 21:55:59.715058 systemd[1]: Listening on iscsiuio.socket.
Jul 14 21:55:59.715067 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 14 21:55:59.715074 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 14 21:55:59.715080 systemd[1]: Listening on systemd-journald.socket.
Jul 14 21:55:59.715087 systemd[1]: Listening on systemd-networkd.socket.
Jul 14 21:55:59.715094 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 14 21:55:59.715101 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 14 21:55:59.715108 systemd[1]: Reached target sockets.target.
Jul 14 21:55:59.715115 systemd[1]: Starting kmod-static-nodes.service...
Jul 14 21:55:59.715122 systemd[1]: Finished network-cleanup.service.
Jul 14 21:55:59.715130 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:55:59.715137 systemd[1]: Starting systemd-journald.service...
Jul 14 21:55:59.715144 systemd[1]: Starting systemd-modules-load.service...
Jul 14 21:55:59.715151 systemd[1]: Starting systemd-resolved.service...
Jul 14 21:55:59.715158 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 14 21:55:59.715165 systemd[1]: Finished kmod-static-nodes.service.
Jul 14 21:55:59.715171 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:55:59.715178 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 14 21:55:59.715185 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 14 21:55:59.715194 kernel: audit: type=1130 audit(1752530159.712:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.715204 systemd-journald[290]: Journal started
Jul 14 21:55:59.715241 systemd-journald[290]: Runtime Journal (/run/log/journal/d53429f88761485bb5e404260eda621f) is 6.0M, max 48.7M, 42.6M free.
Jul 14 21:55:59.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.708058 systemd-modules-load[291]: Inserted module 'overlay'
Jul 14 21:55:59.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.717625 systemd[1]: Started systemd-journald.service.
Jul 14 21:55:59.717644 kernel: audit: type=1130 audit(1752530159.716:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.717687 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 14 21:55:59.720882 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 14 21:55:59.723621 kernel: audit: type=1130 audit(1752530159.719:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.729315 systemd-resolved[292]: Positive Trust Anchors:
Jul 14 21:55:59.729328 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:55:59.729356 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 14 21:55:59.734831 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 14 21:55:59.739078 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:55:59.739099 kernel: audit: type=1130 audit(1752530159.736:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.735671 systemd[1]: Started systemd-resolved.service.
Jul 14 21:55:59.740751 kernel: Bridge firewalling registered
Jul 14 21:55:59.736986 systemd[1]: Reached target nss-lookup.target.
Jul 14 21:55:59.740277 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 14 21:55:59.743749 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 14 21:55:59.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.745079 systemd[1]: Starting dracut-cmdline.service...
Jul 14 21:55:59.747355 kernel: audit: type=1130 audit(1752530159.743:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.751723 kernel: SCSI subsystem initialized
Jul 14 21:55:59.753477 dracut-cmdline[310]: dracut-dracut-053
Jul 14 21:55:59.755592 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236
Jul 14 21:55:59.761277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:55:59.761308 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:55:59.761318 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 14 21:55:59.763432 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 14 21:55:59.764187 systemd[1]: Finished systemd-modules-load.service.
Jul 14 21:55:59.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.765440 systemd[1]: Starting systemd-sysctl.service...
Jul 14 21:55:59.767938 kernel: audit: type=1130 audit(1752530159.764:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.773181 systemd[1]: Finished systemd-sysctl.service.
Jul 14 21:55:59.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.776620 kernel: audit: type=1130 audit(1752530159.773:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.812601 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:55:59.826613 kernel: iscsi: registered transport (tcp)
Jul 14 21:55:59.841615 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:55:59.841650 kernel: QLogic iSCSI HBA Driver
Jul 14 21:55:59.873555 systemd[1]: Finished dracut-cmdline.service.
Jul 14 21:55:59.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.874917 systemd[1]: Starting dracut-pre-udev.service...
Jul 14 21:55:59.877176 kernel: audit: type=1130 audit(1752530159.873:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:55:59.918616 kernel: raid6: neonx8 gen() 13718 MB/s
Jul 14 21:55:59.935602 kernel: raid6: neonx8 xor() 10744 MB/s
Jul 14 21:55:59.952600 kernel: raid6: neonx4 gen() 13510 MB/s
Jul 14 21:55:59.969604 kernel: raid6: neonx4 xor() 11105 MB/s
Jul 14 21:55:59.986604 kernel: raid6: neonx2 gen() 12912 MB/s
Jul 14 21:56:00.003609 kernel: raid6: neonx2 xor() 10389 MB/s
Jul 14 21:56:00.020607 kernel: raid6: neonx1 gen() 10562 MB/s
Jul 14 21:56:00.037604 kernel: raid6: neonx1 xor() 8748 MB/s
Jul 14 21:56:00.054600 kernel: raid6: int64x8 gen() 6262 MB/s
Jul 14 21:56:00.071595 kernel: raid6: int64x8 xor() 3539 MB/s
Jul 14 21:56:00.088607 kernel: raid6: int64x4 gen() 7198 MB/s
Jul 14 21:56:00.105602 kernel: raid6: int64x4 xor() 3850 MB/s
Jul 14 21:56:00.122612 kernel: raid6: int64x2 gen() 6134 MB/s
Jul 14 21:56:00.139603 kernel: raid6: int64x2 xor() 3317 MB/s
Jul 14 21:56:00.156604 kernel: raid6: int64x1 gen() 5041 MB/s
Jul 14 21:56:00.173757 kernel: raid6: int64x1 xor() 2640 MB/s
Jul 14 21:56:00.173779 kernel: raid6: using algorithm neonx8 gen() 13718 MB/s
Jul 14 21:56:00.173796 kernel: raid6: .... xor() 10744 MB/s, rmw enabled
Jul 14 21:56:00.173812 kernel: raid6: using neon recovery algorithm
Jul 14 21:56:00.184947 kernel: xor: measuring software checksum speed
Jul 14 21:56:00.184975 kernel: 8regs : 16763 MB/sec
Jul 14 21:56:00.184991 kernel: 32regs : 20733 MB/sec
Jul 14 21:56:00.185836 kernel: arm64_neon : 26892 MB/sec
Jul 14 21:56:00.185853 kernel: xor: using function: arm64_neon (26892 MB/sec)
Jul 14 21:56:00.239601 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 14 21:56:00.250287 systemd[1]: Finished dracut-pre-udev.service.
Jul 14 21:56:00.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:00.251755 systemd[1]: Starting systemd-udevd.service...
Jul 14 21:56:00.254252 kernel: audit: type=1130 audit(1752530160.250:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:00.250000 audit: BPF prog-id=7 op=LOAD
Jul 14 21:56:00.250000 audit: BPF prog-id=8 op=LOAD
Jul 14 21:56:00.267060 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Jul 14 21:56:00.270300 systemd[1]: Started systemd-udevd.service.
Jul 14 21:56:00.271602 systemd[1]: Starting dracut-pre-trigger.service...
Jul 14 21:56:00.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:00.282852 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Jul 14 21:56:00.308199 systemd[1]: Finished dracut-pre-trigger.service.
Jul 14 21:56:00.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:00.309521 systemd[1]: Starting systemd-udev-trigger.service...
Jul 14 21:56:00.341290 systemd[1]: Finished systemd-udev-trigger.service.
Jul 14 21:56:00.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:00.369601 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:56:00.373355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:56:00.373369 kernel: GPT:9289727 != 19775487
Jul 14 21:56:00.373378 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:56:00.373386 kernel: GPT:9289727 != 19775487
Jul 14 21:56:00.373394 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:56:00.373402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:56:00.384609 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (557)
Jul 14 21:56:00.388594 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 14 21:56:00.391259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 14 21:56:00.392109 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 14 21:56:00.396164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 14 21:56:00.399227 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 14 21:56:00.401488 systemd[1]: Starting disk-uuid.service...
Jul 14 21:56:00.407397 disk-uuid[564]: Primary Header is updated.
Jul 14 21:56:00.407397 disk-uuid[564]: Secondary Entries is updated.
Jul 14 21:56:00.407397 disk-uuid[564]: Secondary Header is updated.
Jul 14 21:56:00.410609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:56:00.422613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:56:01.427605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:56:01.427705 disk-uuid[565]: The operation has completed successfully.
Jul 14 21:56:01.455121 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:56:01.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.455215 systemd[1]: Finished disk-uuid.service.
Jul 14 21:56:01.456691 systemd[1]: Starting verity-setup.service...
Jul 14 21:56:01.471630 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:56:01.491513 systemd[1]: Found device dev-mapper-usr.device.
Jul 14 21:56:01.493488 systemd[1]: Mounting sysusr-usr.mount...
Jul 14 21:56:01.496000 systemd[1]: Finished verity-setup.service.
Jul 14 21:56:01.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.551426 systemd[1]: Mounted sysusr-usr.mount.
Jul 14 21:56:01.552526 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 14 21:56:01.552114 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 14 21:56:01.552811 systemd[1]: Starting ignition-setup.service...
Jul 14 21:56:01.554543 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 14 21:56:01.562787 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:56:01.562823 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:56:01.562836 kernel: BTRFS info (device vda6): has skinny extents
Jul 14 21:56:01.570981 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 21:56:01.577135 systemd[1]: Finished ignition-setup.service.
Jul 14 21:56:01.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.578371 systemd[1]: Starting ignition-fetch-offline.service...
Jul 14 21:56:01.629568 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 14 21:56:01.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.630000 audit: BPF prog-id=9 op=LOAD
Jul 14 21:56:01.631457 systemd[1]: Starting systemd-networkd.service...
Jul 14 21:56:01.657501 systemd-networkd[740]: lo: Link UP
Jul 14 21:56:01.657514 systemd-networkd[740]: lo: Gained carrier
Jul 14 21:56:01.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.657921 systemd-networkd[740]: Enumeration completed
Jul 14 21:56:01.658007 systemd[1]: Started systemd-networkd.service.
Jul 14 21:56:01.658122 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:56:01.658836 systemd[1]: Reached target network.target.
Jul 14 21:56:01.659484 systemd-networkd[740]: eth0: Link UP
Jul 14 21:56:01.659487 systemd-networkd[740]: eth0: Gained carrier
Jul 14 21:56:01.660786 systemd[1]: Starting iscsiuio.service...
Jul 14 21:56:01.665724 ignition[655]: Ignition 2.14.0
Jul 14 21:56:01.665740 ignition[655]: Stage: fetch-offline
Jul 14 21:56:01.665791 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:56:01.665801 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:56:01.665959 ignition[655]: parsed url from cmdline: ""
Jul 14 21:56:01.665962 ignition[655]: no config URL provided
Jul 14 21:56:01.665967 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:56:01.665974 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:56:01.665991 ignition[655]: op(1): [started] loading QEMU firmware config module
Jul 14 21:56:01.665996 ignition[655]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:56:01.673158 systemd[1]: Started iscsiuio.service.
Jul 14 21:56:01.674666 systemd[1]: Starting iscsid.service...
Jul 14 21:56:01.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.678453 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 14 21:56:01.678453 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul 14 21:56:01.678453 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 14 21:56:01.678453 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 14 21:56:01.678453 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 14 21:56:01.678453 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 14 21:56:01.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.679664 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:56:01.682109 ignition[655]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:56:01.682096 systemd[1]: Started iscsid.service.
Jul 14 21:56:01.684478 systemd[1]: Starting dracut-initqueue.service...
Jul 14 21:56:01.694952 systemd[1]: Finished dracut-initqueue.service.
Jul 14 21:56:01.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.695816 systemd[1]: Reached target remote-fs-pre.target.
Jul 14 21:56:01.697085 systemd[1]: Reached target remote-cryptsetup.target.
Jul 14 21:56:01.698402 systemd[1]: Reached target remote-fs.target.
Jul 14 21:56:01.700357 systemd[1]: Starting dracut-pre-mount.service...
Jul 14 21:56:01.708118 systemd[1]: Finished dracut-pre-mount.service.
Jul 14 21:56:01.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.732805 ignition[655]: parsing config with SHA512: 1f4865abcadf45c5371449e0eedb8dfc3e633e4a45bce2ddc201ba721f679468964f60655610b66cf61070204e409df67b05f1cb1c4dd3d02749ce7e29477cc7
Jul 14 21:56:01.738936 unknown[655]: fetched base config from "system"
Jul 14 21:56:01.738949 unknown[655]: fetched user config from "qemu"
Jul 14 21:56:01.739480 ignition[655]: fetch-offline: fetch-offline passed
Jul 14 21:56:01.739535 ignition[655]: Ignition finished successfully
Jul 14 21:56:01.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.740972 systemd[1]: Finished ignition-fetch-offline.service.
Jul 14 21:56:01.742100 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:56:01.742834 systemd[1]: Starting ignition-kargs.service...
Jul 14 21:56:01.752092 ignition[762]: Ignition 2.14.0
Jul 14 21:56:01.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.753408 systemd[1]: Finished ignition-kargs.service.
Jul 14 21:56:01.752107 ignition[762]: Stage: kargs
Jul 14 21:56:01.754784 systemd[1]: Starting ignition-disks.service...
Jul 14 21:56:01.752197 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:56:01.752207 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:56:01.753069 ignition[762]: kargs: kargs passed
Jul 14 21:56:01.753110 ignition[762]: Ignition finished successfully
Jul 14 21:56:01.760837 ignition[768]: Ignition 2.14.0
Jul 14 21:56:01.760847 ignition[768]: Stage: disks
Jul 14 21:56:01.760929 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:56:01.760938 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:56:01.763165 systemd[1]: Finished ignition-disks.service.
Jul 14 21:56:01.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.761859 ignition[768]: disks: disks passed
Jul 14 21:56:01.764452 systemd[1]: Reached target initrd-root-device.target.
Jul 14 21:56:01.761898 ignition[768]: Ignition finished successfully
Jul 14 21:56:01.765390 systemd[1]: Reached target local-fs-pre.target.
Jul 14 21:56:01.766256 systemd[1]: Reached target local-fs.target.
Jul 14 21:56:01.767244 systemd[1]: Reached target sysinit.target.
Jul 14 21:56:01.768150 systemd[1]: Reached target basic.target.
Jul 14 21:56:01.769835 systemd[1]: Starting systemd-fsck-root.service...
Jul 14 21:56:01.782801 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks
Jul 14 21:56:01.785918 systemd[1]: Finished systemd-fsck-root.service.
Jul 14 21:56:01.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.788418 systemd[1]: Mounting sysroot.mount...
Jul 14 21:56:01.794359 systemd[1]: Mounted sysroot.mount.
Jul 14 21:56:01.795330 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 14 21:56:01.794982 systemd[1]: Reached target initrd-root-fs.target.
Jul 14 21:56:01.798473 systemd[1]: Mounting sysroot-usr.mount...
Jul 14 21:56:01.799213 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 14 21:56:01.799248 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:56:01.799271 systemd[1]: Reached target ignition-diskful.target.
Jul 14 21:56:01.800874 systemd[1]: Mounted sysroot-usr.mount.
Jul 14 21:56:01.802033 systemd[1]: Starting initrd-setup-root.service...
Jul 14 21:56:01.806092 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:56:01.810053 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:56:01.813845 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:56:01.817350 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:56:01.841210 systemd[1]: Finished initrd-setup-root.service.
Jul 14 21:56:01.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.842492 systemd[1]: Starting ignition-mount.service...
Jul 14 21:56:01.843641 systemd[1]: Starting sysroot-boot.service...
Jul 14 21:56:01.848016 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
Jul 14 21:56:01.855839 ignition[829]: INFO : Ignition 2.14.0
Jul 14 21:56:01.855839 ignition[829]: INFO : Stage: mount
Jul 14 21:56:01.857097 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:56:01.857097 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:56:01.857097 ignition[829]: INFO : mount: mount passed
Jul 14 21:56:01.857097 ignition[829]: INFO : Ignition finished successfully
Jul 14 21:56:01.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:01.858502 systemd[1]: Finished ignition-mount.service.
Jul 14 21:56:01.864279 systemd[1]: Finished sysroot-boot.service.
Jul 14 21:56:01.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:02.505648 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 14 21:56:02.514614 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (837)
Jul 14 21:56:02.515951 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:56:02.515964 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:56:02.515973 kernel: BTRFS info (device vda6): has skinny extents
Jul 14 21:56:02.519189 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 14 21:56:02.520522 systemd[1]: Starting ignition-files.service...
Jul 14 21:56:02.534060 ignition[857]: INFO : Ignition 2.14.0
Jul 14 21:56:02.534060 ignition[857]: INFO : Stage: files
Jul 14 21:56:02.535230 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:56:02.535230 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:56:02.535230 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:56:02.539976 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:56:02.539976 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:56:02.543272 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:56:02.544322 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:56:02.544322 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:56:02.543952 unknown[857]: wrote ssh authorized keys file for user: core
Jul 14 21:56:02.547419 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 21:56:02.547419 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 21:56:02.547419 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:56:02.547419 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 14 21:56:02.946797 systemd-networkd[740]: eth0: Gained IPv6LL
Jul 14 21:56:12.667690 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 21:56:12.809684 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:56:12.811193 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 14 21:56:43.484206 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 21:56:43.923842 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:56:43.923842 ignition[857]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 14 21:56:43.926284 ignition[857]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:56:43.931101 ignition[857]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:56:43.981498 ignition[857]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:56:43.982705 ignition[857]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:56:43.982705 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:56:43.982705 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:56:43.982705 ignition[857]: INFO : files: files passed
Jul 14 21:56:43.982705 ignition[857]: INFO : Ignition finished successfully
Jul 14 21:56:43.986044 systemd[1]: Finished ignition-files.service.
Jul 14 21:56:44.002313 kernel: kauditd_printk_skb: 23 callbacks suppressed
Jul 14 21:56:44.002334 kernel: audit: type=1130 audit(1752530203.986:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:43.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:43.987516 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 14 21:56:43.988489 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 14 21:56:44.004867 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 14 21:56:44.001546 systemd[1]: Starting ignition-quench.service...
Jul 14 21:56:44.007334 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:56:44.007439 systemd[1]: Finished ignition-quench.service.
Jul 14 21:56:44.009627 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:56:44.014693 kernel: audit: type=1130 audit(1752530204.009:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.014714 kernel: audit: type=1131 audit(1752530204.009:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.011103 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 14 21:56:44.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.015388 systemd[1]: Reached target ignition-complete.target.
Jul 14 21:56:44.018776 kernel: audit: type=1130 audit(1752530204.014:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.021624 systemd[1]: Starting initrd-parse-etc.service...
Jul 14 21:56:44.034831 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:56:44.034928 systemd[1]: Finished initrd-parse-etc.service.
Jul 14 21:56:44.040156 kernel: audit: type=1130 audit(1752530204.035:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.040175 kernel: audit: type=1131 audit(1752530204.035:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.036331 systemd[1]: Reached target initrd-fs.target.
Jul 14 21:56:44.041079 systemd[1]: Reached target initrd.target.
Jul 14 21:56:44.042039 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 14 21:56:44.042993 systemd[1]: Starting dracut-pre-pivot.service...
Jul 14 21:56:44.052669 systemd[1]: Finished dracut-pre-pivot.service.
Jul 14 21:56:44.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.054119 systemd[1]: Starting initrd-cleanup.service...
Jul 14 21:56:44.056410 kernel: audit: type=1130 audit(1752530204.052:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.062820 systemd[1]: Stopped target nss-lookup.target.
Jul 14 21:56:44.063465 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 14 21:56:44.065783 systemd[1]: Stopped target timers.target.
Jul 14 21:56:44.066786 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:56:44.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.066897 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 14 21:56:44.071040 kernel: audit: type=1131 audit(1752530204.067:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.067887 systemd[1]: Stopped target initrd.target.
Jul 14 21:56:44.070555 systemd[1]: Stopped target basic.target.
Jul 14 21:56:44.071589 systemd[1]: Stopped target ignition-complete.target.
Jul 14 21:56:44.072624 systemd[1]: Stopped target ignition-diskful.target.
Jul 14 21:56:44.073624 systemd[1]: Stopped target initrd-root-device.target.
Jul 14 21:56:44.074843 systemd[1]: Stopped target remote-fs.target.
Jul 14 21:56:44.075853 systemd[1]: Stopped target remote-fs-pre.target.
Jul 14 21:56:44.076938 systemd[1]: Stopped target sysinit.target.
Jul 14 21:56:44.077863 systemd[1]: Stopped target local-fs.target.
Jul 14 21:56:44.078841 systemd[1]: Stopped target local-fs-pre.target.
Jul 14 21:56:44.079797 systemd[1]: Stopped target swap.target.
Jul 14 21:56:44.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.080683 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 21:56:44.084758 kernel: audit: type=1131 audit(1752530204.081:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.080781 systemd[1]: Stopped dracut-pre-mount.service.
Jul 14 21:56:44.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.081745 systemd[1]: Stopped target cryptsetup.target.
Jul 14 21:56:44.088510 kernel: audit: type=1131 audit(1752530204.084:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.084614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 21:56:44.084717 systemd[1]: Stopped dracut-initqueue.service.
Jul 14 21:56:44.085402 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 21:56:44.085496 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 14 21:56:44.088176 systemd[1]: Stopped target paths.target.
Jul 14 21:56:44.089068 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 21:56:44.092002 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 14 21:56:44.092716 systemd[1]: Stopped target slices.target.
Jul 14 21:56:44.093902 systemd[1]: Stopped target sockets.target.
Jul 14 21:56:44.095027 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 21:56:44.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.095095 systemd[1]: Closed iscsid.socket.
Jul 14 21:56:44.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:44.095911 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 21:56:44.096011 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 14 21:56:44.096972 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 21:56:44.097058 systemd[1]: Stopped ignition-files.service.
Jul 14 21:56:44.098789 systemd[1]: Stopping ignition-mount.service...
Jul 14 21:56:44.099785 systemd[1]: Stopping iscsiuio.service...
Jul 14 21:56:44.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.101327 systemd[1]: Stopping sysroot-boot.service... Jul 14 21:56:44.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.102338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:56:44.102504 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 21:56:44.103531 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:56:44.103687 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 21:56:44.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.107855 ignition[898]: INFO : Ignition 2.14.0 Jul 14 21:56:44.107855 ignition[898]: INFO : Stage: umount Jul 14 21:56:44.106662 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 21:56:44.109652 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:56:44.109652 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:56:44.109652 ignition[898]: INFO : umount: umount passed Jul 14 21:56:44.109652 ignition[898]: INFO : Ignition finished successfully Jul 14 21:56:44.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:56:44.106754 systemd[1]: Stopped iscsiuio.service. Jul 14 21:56:44.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.109575 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:56:44.109703 systemd[1]: Closed iscsiuio.socket. Jul 14 21:56:44.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.112038 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:56:44.112176 systemd[1]: Finished initrd-cleanup.service. Jul 14 21:56:44.115178 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:56:44.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.115536 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:56:44.115671 systemd[1]: Stopped ignition-mount.service. Jul 14 21:56:44.116354 systemd[1]: Stopped target network.target. Jul 14 21:56:44.117154 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:56:44.117200 systemd[1]: Stopped ignition-disks.service. Jul 14 21:56:44.119245 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:56:44.119289 systemd[1]: Stopped ignition-kargs.service. Jul 14 21:56:44.121471 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jul 14 21:56:44.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.121512 systemd[1]: Stopped ignition-setup.service. Jul 14 21:56:44.122484 systemd[1]: Stopping systemd-networkd.service... Jul 14 21:56:44.124358 systemd[1]: Stopping systemd-resolved.service... Jul 14 21:56:44.127618 systemd-networkd[740]: eth0: DHCPv6 lease lost Jul 14 21:56:44.129668 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:56:44.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.138000 audit: BPF prog-id=9 op=UNLOAD Jul 14 21:56:44.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.129764 systemd[1]: Stopped systemd-networkd.service. Jul 14 21:56:44.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.131727 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:56:44.131758 systemd[1]: Closed systemd-networkd.socket. Jul 14 21:56:44.133915 systemd[1]: Stopping network-cleanup.service... Jul 14 21:56:44.136877 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:56:44.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.136950 systemd[1]: Stopped parse-ip-for-networkd.service. 
Jul 14 21:56:44.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.137988 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:56:44.138025 systemd[1]: Stopped systemd-sysctl.service. Jul 14 21:56:44.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.139762 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:56:44.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.139801 systemd[1]: Stopped systemd-modules-load.service. Jul 14 21:56:44.148000 audit: BPF prog-id=6 op=UNLOAD Jul 14 21:56:44.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.140526 systemd[1]: Stopping systemd-udevd.service... Jul 14 21:56:44.142292 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:56:44.142867 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:56:44.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.142976 systemd[1]: Stopped systemd-resolved.service. Jul 14 21:56:44.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:56:44.144132 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:56:44.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.144214 systemd[1]: Stopped sysroot-boot.service. Jul 14 21:56:44.145623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:56:44.145671 systemd[1]: Stopped initrd-setup-root.service. Jul 14 21:56:44.146923 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:56:44.147045 systemd[1]: Stopped systemd-udevd.service. Jul 14 21:56:44.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.148044 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:56:44.148126 systemd[1]: Stopped network-cleanup.service. Jul 14 21:56:44.149095 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:56:44.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:56:44.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:44.149127 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 21:56:44.150040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:56:44.150066 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 21:56:44.151024 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:56:44.151064 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 21:56:44.152423 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:56:44.152460 systemd[1]: Stopped dracut-cmdline.service. Jul 14 21:56:44.153500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:56:44.153533 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 21:56:44.155239 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 21:56:44.156181 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:56:44.156234 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 14 21:56:44.157935 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:56:44.157980 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 21:56:44.158602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:56:44.172000 audit: BPF prog-id=8 op=UNLOAD Jul 14 21:56:44.172000 audit: BPF prog-id=7 op=UNLOAD Jul 14 21:56:44.158637 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 21:56:44.173000 audit: BPF prog-id=5 op=UNLOAD Jul 14 21:56:44.173000 audit: BPF prog-id=4 op=UNLOAD Jul 14 21:56:44.173000 audit: BPF prog-id=3 op=UNLOAD Jul 14 21:56:44.160745 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jul 14 21:56:44.161228 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:56:44.161308 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 21:56:44.162635 systemd[1]: Reached target initrd-switch-root.target. Jul 14 21:56:44.164346 systemd[1]: Starting initrd-switch-root.service... Jul 14 21:56:44.170627 systemd[1]: Switching root. Jul 14 21:56:44.191802 iscsid[746]: iscsid shutting down. Jul 14 21:56:44.192308 systemd-journald[290]: Journal stopped Jul 14 21:56:46.196032 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 14 21:56:46.196096 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 21:56:46.196110 kernel: SELinux: Class anon_inode not defined in policy. Jul 14 21:56:46.196121 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 21:56:46.196131 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:56:46.196140 kernel: SELinux: policy capability open_perms=1 Jul 14 21:56:46.196150 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:56:46.196163 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:56:46.196173 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:56:46.196184 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:56:46.196194 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:56:46.196203 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:56:46.196214 systemd[1]: Successfully loaded SELinux policy in 37.786ms. Jul 14 21:56:46.196232 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.101ms. 
Jul 14 21:56:46.196244 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 21:56:46.196256 systemd[1]: Detected virtualization kvm. Jul 14 21:56:46.196267 systemd[1]: Detected architecture arm64. Jul 14 21:56:46.196280 systemd[1]: Detected first boot. Jul 14 21:56:46.196290 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:56:46.196301 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 21:56:46.196313 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:56:46.196325 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:56:46.196338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:56:46.196350 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:56:46.196360 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:56:46.196371 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 21:56:46.196381 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 21:56:46.196392 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 21:56:46.196403 systemd[1]: Created slice system-getty.slice. Jul 14 21:56:46.196414 systemd[1]: Created slice system-modprobe.slice. Jul 14 21:56:46.196425 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 14 21:56:46.196436 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 21:56:46.196447 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 21:56:46.196457 systemd[1]: Created slice user.slice. Jul 14 21:56:46.196467 systemd[1]: Started systemd-ask-password-console.path. Jul 14 21:56:46.196478 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 21:56:46.196488 systemd[1]: Set up automount boot.automount. Jul 14 21:56:46.196500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 21:56:46.196511 systemd[1]: Reached target integritysetup.target. Jul 14 21:56:46.196521 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 21:56:46.196532 systemd[1]: Reached target remote-fs.target. Jul 14 21:56:46.196543 systemd[1]: Reached target slices.target. Jul 14 21:56:46.196553 systemd[1]: Reached target swap.target. Jul 14 21:56:46.196564 systemd[1]: Reached target torcx.target. Jul 14 21:56:46.196574 systemd[1]: Reached target veritysetup.target. Jul 14 21:56:46.196603 systemd[1]: Listening on systemd-coredump.socket. Jul 14 21:56:46.196618 systemd[1]: Listening on systemd-initctl.socket. Jul 14 21:56:46.196629 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 21:56:46.196640 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 21:56:46.196650 systemd[1]: Listening on systemd-journald.socket. Jul 14 21:56:46.196661 systemd[1]: Listening on systemd-networkd.socket. Jul 14 21:56:46.196672 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 21:56:46.196683 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 21:56:46.196693 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 21:56:46.196703 systemd[1]: Mounting dev-hugepages.mount... Jul 14 21:56:46.196715 systemd[1]: Mounting dev-mqueue.mount... Jul 14 21:56:46.196726 systemd[1]: Mounting media.mount... Jul 14 21:56:46.196736 systemd[1]: Mounting sys-kernel-debug.mount... 
Jul 14 21:56:46.196747 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 21:56:46.196757 systemd[1]: Mounting tmp.mount... Jul 14 21:56:46.196770 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 21:56:46.196782 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 21:56:46.196792 systemd[1]: Starting kmod-static-nodes.service... Jul 14 21:56:46.196803 systemd[1]: Starting modprobe@configfs.service... Jul 14 21:56:46.196815 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 21:56:46.196825 systemd[1]: Starting modprobe@drm.service... Jul 14 21:56:46.196837 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 21:56:46.196847 systemd[1]: Starting modprobe@fuse.service... Jul 14 21:56:46.196857 systemd[1]: Starting modprobe@loop.service... Jul 14 21:56:46.196868 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:56:46.196886 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 14 21:56:46.196899 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 14 21:56:46.196910 systemd[1]: Starting systemd-journald.service... Jul 14 21:56:46.196923 kernel: loop: module loaded Jul 14 21:56:46.196933 systemd[1]: Starting systemd-modules-load.service... Jul 14 21:56:46.196944 systemd[1]: Starting systemd-network-generator.service... Jul 14 21:56:46.196955 systemd[1]: Starting systemd-remount-fs.service... Jul 14 21:56:46.196965 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 21:56:46.196976 systemd[1]: Mounted dev-hugepages.mount. Jul 14 21:56:46.196986 systemd[1]: Mounted dev-mqueue.mount. Jul 14 21:56:46.196997 systemd[1]: Mounted media.mount. Jul 14 21:56:46.197007 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 21:56:46.197019 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 14 21:56:46.197029 systemd[1]: Mounted tmp.mount. Jul 14 21:56:46.197040 systemd[1]: Finished kmod-static-nodes.service. Jul 14 21:56:46.197050 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:56:46.197061 systemd[1]: Finished modprobe@configfs.service. Jul 14 21:56:46.197072 kernel: fuse: init (API version 7.34) Jul 14 21:56:46.197082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:56:46.197093 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 21:56:46.197103 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:56:46.197115 systemd[1]: Finished modprobe@drm.service. Jul 14 21:56:46.197126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:56:46.197136 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 21:56:46.197147 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:56:46.197157 systemd[1]: Finished modprobe@fuse.service. Jul 14 21:56:46.197167 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:56:46.197178 systemd[1]: Finished modprobe@loop.service. Jul 14 21:56:46.197191 systemd-journald[1027]: Journal started Jul 14 21:56:46.197235 systemd-journald[1027]: Runtime Journal (/run/log/journal/d53429f88761485bb5e404260eda621f) is 6.0M, max 48.7M, 42.6M free. Jul 14 21:56:46.107000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 21:56:46.107000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 14 21:56:46.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:56:46.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:56:46.194000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 21:56:46.194000 audit[1027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffff29c460 a2=4000 a3=1 items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:46.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.194000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 21:56:46.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.198640 systemd[1]: Finished systemd-modules-load.service. Jul 14 21:56:46.201337 systemd[1]: Started systemd-journald.service. 
Jul 14 21:56:46.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.200805 systemd[1]: Finished systemd-network-generator.service. Jul 14 21:56:46.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.201820 systemd[1]: Finished systemd-remount-fs.service. Jul 14 21:56:46.202931 systemd[1]: Reached target network-pre.target. Jul 14 21:56:46.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.204500 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 21:56:46.206124 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 21:56:46.206685 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:56:46.208265 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 21:56:46.210291 systemd[1]: Starting systemd-journal-flush.service... Jul 14 21:56:46.210933 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:56:46.211960 systemd[1]: Starting systemd-random-seed.service... Jul 14 21:56:46.212936 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 21:56:46.214420 systemd[1]: Starting systemd-sysctl.service... Jul 14 21:56:46.216199 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 21:56:46.217056 systemd[1]: Mounted sys-kernel-config.mount. 
Jul 14 21:56:46.224347 systemd-journald[1027]: Time spent on flushing to /var/log/journal/d53429f88761485bb5e404260eda621f is 11.880ms for 932 entries. Jul 14 21:56:46.224347 systemd-journald[1027]: System Journal (/var/log/journal/d53429f88761485bb5e404260eda621f) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:56:46.246717 systemd-journald[1027]: Received client request to flush runtime journal. Jul 14 21:56:46.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:46.224913 systemd[1]: Finished systemd-random-seed.service. Jul 14 21:56:46.226355 systemd[1]: Reached target first-boot-complete.target. Jul 14 21:56:46.237298 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 21:56:46.238232 systemd[1]: Finished systemd-sysctl.service. Jul 14 21:56:46.239084 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 21:56:46.240798 systemd[1]: Starting systemd-sysusers.service... Jul 14 21:56:46.242385 systemd[1]: Starting systemd-udev-settle.service... 
Jul 14 21:56:46.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.247808 systemd[1]: Finished systemd-journal-flush.service.
Jul 14 21:56:46.252783 udevadm[1085]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 14 21:56:46.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.257798 systemd[1]: Finished systemd-sysusers.service.
Jul 14 21:56:46.259449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 14 21:56:46.275646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 14 21:56:46.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.571243 systemd[1]: Finished systemd-hwdb-update.service.
Jul 14 21:56:46.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.573106 systemd[1]: Starting systemd-udevd.service...
Jul 14 21:56:46.590693 systemd-udevd[1093]: Using default interface naming scheme 'v252'.
Jul 14 21:56:46.601644 systemd[1]: Started systemd-udevd.service.
Jul 14 21:56:46.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.604936 systemd[1]: Starting systemd-networkd.service...
Jul 14 21:56:46.611889 systemd[1]: Starting systemd-userdbd.service...
Jul 14 21:56:46.616508 systemd[1]: Found device dev-ttyAMA0.device.
Jul 14 21:56:46.653039 systemd[1]: Started systemd-userdbd.service.
Jul 14 21:56:46.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.666321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 14 21:56:46.705012 systemd[1]: Finished systemd-udev-settle.service.
Jul 14 21:56:46.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.706764 systemd[1]: Starting lvm2-activation-early.service...
Jul 14 21:56:46.722786 lvm[1127]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:56:46.727462 systemd-networkd[1104]: lo: Link UP
Jul 14 21:56:46.727475 systemd-networkd[1104]: lo: Gained carrier
Jul 14 21:56:46.727870 systemd-networkd[1104]: Enumeration completed
Jul 14 21:56:46.727990 systemd-networkd[1104]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:56:46.728005 systemd[1]: Started systemd-networkd.service.
Jul 14 21:56:46.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.729767 systemd-networkd[1104]: eth0: Link UP
Jul 14 21:56:46.729778 systemd-networkd[1104]: eth0: Gained carrier
Jul 14 21:56:46.746414 systemd[1]: Finished lvm2-activation-early.service.
Jul 14 21:56:46.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.747179 systemd[1]: Reached target cryptsetup.target.
Jul 14 21:56:46.748679 systemd-networkd[1104]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:56:46.749076 systemd[1]: Starting lvm2-activation.service...
Jul 14 21:56:46.752601 lvm[1129]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:56:46.786550 systemd[1]: Finished lvm2-activation.service.
Jul 14 21:56:46.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.787306 systemd[1]: Reached target local-fs-pre.target.
Jul 14 21:56:46.787963 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 21:56:46.787990 systemd[1]: Reached target local-fs.target.
Jul 14 21:56:46.788535 systemd[1]: Reached target machines.target.
Jul 14 21:56:46.790276 systemd[1]: Starting ldconfig.service...
Jul 14 21:56:46.791254 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 14 21:56:46.791303 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:46.792651 systemd[1]: Starting systemd-boot-update.service...
Jul 14 21:56:46.794276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 14 21:56:46.796088 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 14 21:56:46.798495 systemd[1]: Starting systemd-sysext.service...
Jul 14 21:56:46.804671 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1132 (bootctl)
Jul 14 21:56:46.805904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 14 21:56:46.814630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 14 21:56:46.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.818648 systemd[1]: Unmounting usr-share-oem.mount...
Jul 14 21:56:46.823188 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 14 21:56:46.823446 systemd[1]: Unmounted usr-share-oem.mount.
Jul 14 21:56:46.872597 kernel: loop0: detected capacity change from 0 to 203944
Jul 14 21:56:46.872862 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 14 21:56:46.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.875665 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31)
Jul 14 21:56:46.875665 systemd-fsck[1140]: /dev/vda1: 236 files, 117310/258078 clusters
Jul 14 21:56:46.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.878355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 14 21:56:46.887093 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 21:56:46.908708 kernel: loop1: detected capacity change from 0 to 203944
Jul 14 21:56:46.913615 (sd-sysext)[1150]: Using extensions 'kubernetes'.
Jul 14 21:56:46.914232 (sd-sysext)[1150]: Merged extensions into '/usr'.
Jul 14 21:56:46.928908 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 14 21:56:46.930182 systemd[1]: Starting modprobe@dm_mod.service...
Jul 14 21:56:46.932191 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 14 21:56:46.934095 systemd[1]: Starting modprobe@loop.service...
Jul 14 21:56:46.934933 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 14 21:56:46.935057 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:46.935811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:56:46.935976 systemd[1]: Finished modprobe@dm_mod.service.
Jul 14 21:56:46.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.937210 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:56:46.937377 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 14 21:56:46.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.938575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:56:46.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:46.938743 systemd[1]: Finished modprobe@loop.service.
Jul 14 21:56:46.939891 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:56:46.940006 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 14 21:56:46.988117 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 21:56:46.991409 systemd[1]: Finished ldconfig.service.
Jul 14 21:56:46.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.168951 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 21:56:47.170855 systemd[1]: Mounting boot.mount...
Jul 14 21:56:47.172618 systemd[1]: Mounting usr-share-oem.mount...
Jul 14 21:56:47.179105 systemd[1]: Mounted boot.mount.
Jul 14 21:56:47.179865 systemd[1]: Mounted usr-share-oem.mount.
Jul 14 21:56:47.181684 systemd[1]: Finished systemd-sysext.service.
Jul 14 21:56:47.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.183612 systemd[1]: Starting ensure-sysext.service...
Jul 14 21:56:47.185545 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 14 21:56:47.189339 systemd[1]: Finished systemd-boot-update.service.
Jul 14 21:56:47.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.191656 systemd[1]: Reloading.
Jul 14 21:56:47.195118 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 14 21:56:47.195828 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 21:56:47.197276 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 21:56:47.227404 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-07-14T21:56:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]"
Jul 14 21:56:47.227437 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-07-14T21:56:47Z" level=info msg="torcx already run"
Jul 14 21:56:47.295308 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 14 21:56:47.295327 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 14 21:56:47.312904 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:56:47.353015 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 14 21:56:47.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.356813 systemd[1]: Starting audit-rules.service...
Jul 14 21:56:47.358453 systemd[1]: Starting clean-ca-certificates.service...
Jul 14 21:56:47.360672 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 14 21:56:47.362806 systemd[1]: Starting systemd-resolved.service...
Jul 14 21:56:47.365173 systemd[1]: Starting systemd-timesyncd.service...
Jul 14 21:56:47.367111 systemd[1]: Starting systemd-update-utmp.service...
Jul 14 21:56:47.368810 systemd[1]: Finished clean-ca-certificates.service.
Jul 14 21:56:47.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.371000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.374334 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:56:47.377045 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.378436 systemd[1]: Starting modprobe@dm_mod.service...
Jul 14 21:56:47.380490 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 14 21:56:47.382921 systemd[1]: Starting modprobe@loop.service...
Jul 14 21:56:47.383618 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.383763 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:47.383895 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:56:47.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.384842 systemd[1]: Finished systemd-update-utmp.service.
Jul 14 21:56:47.385994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:56:47.386135 systemd[1]: Finished modprobe@dm_mod.service.
Jul 14 21:56:47.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.387268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:56:47.387404 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 14 21:56:47.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.390165 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:56:47.392238 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 14 21:56:47.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.393760 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:56:47.394067 systemd[1]: Finished modprobe@loop.service.
Jul 14 21:56:47.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.395133 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.396297 systemd[1]: Starting modprobe@dm_mod.service...
Jul 14 21:56:47.398418 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 14 21:56:47.399204 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.399345 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:47.401087 systemd[1]: Starting systemd-update-done.service...
Jul 14 21:56:47.402470 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:56:47.403459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:56:47.403856 systemd[1]: Finished modprobe@dm_mod.service.
Jul 14 21:56:47.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.404976 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.408981 systemd[1]: Finished systemd-update-done.service.
Jul 14 21:56:47.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.411808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:56:47.412290 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 14 21:56:47.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.415387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.416798 systemd[1]: Starting modprobe@dm_mod.service...
Jul 14 21:56:47.418548 systemd[1]: Starting modprobe@drm.service...
Jul 14 21:56:47.420298 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 14 21:56:47.422638 systemd[1]: Starting modprobe@loop.service...
Jul 14 21:56:47.423332 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.423479 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:47.428863 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 14 21:56:47.429920 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:56:47.431251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:56:47.431435 systemd[1]: Finished modprobe@dm_mod.service.
Jul 14 21:56:47.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.433499 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:56:47.433674 systemd[1]: Finished modprobe@drm.service.
Jul 14 21:56:47.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.434922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:56:47.435133 systemd[1]: Finished modprobe@loop.service.
Jul 14 21:56:47.436458 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.437806 systemd[1]: Finished ensure-sysext.service.
Jul 14 21:56:47.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.439114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:56:47.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.439346 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 14 21:56:47.440442 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:56:47.449861 systemd[1]: Started systemd-timesyncd.service.
Jul 14 21:56:47.451084 systemd-resolved[1240]: Positive Trust Anchors:
Jul 14 21:56:47.451096 systemd-resolved[1240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:56:47.451123 systemd-resolved[1240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 14 21:56:47.451381 systemd-timesyncd[1241]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 21:56:47.451428 systemd-timesyncd[1241]: Initial clock synchronization to Mon 2025-07-14 21:56:47.134029 UTC.
Jul 14 21:56:47.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:47.452081 systemd[1]: Reached target time-set.target.
Jul 14 21:56:47.455000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 14 21:56:47.455000 audit[1284]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffa5e7490 a2=420 a3=0 items=0 ppid=1235 pid=1284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:56:47.455000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 14 21:56:47.456429 augenrules[1284]: No rules
Jul 14 21:56:47.457875 systemd[1]: Finished audit-rules.service.
Jul 14 21:56:47.463682 systemd-resolved[1240]: Defaulting to hostname 'linux'.
Jul 14 21:56:47.465303 systemd[1]: Started systemd-resolved.service.
Jul 14 21:56:47.466032 systemd[1]: Reached target network.target.
Jul 14 21:56:47.466606 systemd[1]: Reached target nss-lookup.target.
Jul 14 21:56:47.467191 systemd[1]: Reached target sysinit.target.
Jul 14 21:56:47.467828 systemd[1]: Started motdgen.path.
Jul 14 21:56:47.468361 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 14 21:56:47.469353 systemd[1]: Started logrotate.timer.
Jul 14 21:56:47.470048 systemd[1]: Started mdadm.timer.
Jul 14 21:56:47.470555 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 14 21:56:47.471220 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 21:56:47.471250 systemd[1]: Reached target paths.target.
Jul 14 21:56:47.471798 systemd[1]: Reached target timers.target.
Jul 14 21:56:47.472731 systemd[1]: Listening on dbus.socket.
Jul 14 21:56:47.474625 systemd[1]: Starting docker.socket...
Jul 14 21:56:47.476572 systemd[1]: Listening on sshd.socket.
Jul 14 21:56:47.477412 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:47.477860 systemd[1]: Listening on docker.socket.
Jul 14 21:56:47.478485 systemd[1]: Reached target sockets.target.
Jul 14 21:56:47.479151 systemd[1]: Reached target basic.target.
Jul 14 21:56:47.479904 systemd[1]: System is tainted: cgroupsv1
Jul 14 21:56:47.479958 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.479986 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 14 21:56:47.481291 systemd[1]: Starting containerd.service...
Jul 14 21:56:47.483343 systemd[1]: Starting dbus.service...
Jul 14 21:56:47.485417 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 14 21:56:47.487707 systemd[1]: Starting extend-filesystems.service...
Jul 14 21:56:47.488498 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 14 21:56:47.490244 systemd[1]: Starting motdgen.service...
Jul 14 21:56:47.492457 systemd[1]: Starting prepare-helm.service...
Jul 14 21:56:47.495270 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 14 21:56:47.497543 systemd[1]: Starting sshd-keygen.service...
Jul 14 21:56:47.500686 systemd[1]: Starting systemd-logind.service...
Jul 14 21:56:47.509187 jq[1295]: false
Jul 14 21:56:47.501504 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 14 21:56:47.501729 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 21:56:47.503555 systemd[1]: Starting update-engine.service...
Jul 14 21:56:47.509858 jq[1309]: true
Jul 14 21:56:47.505913 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 14 21:56:47.509730 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 21:56:47.510034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 14 21:56:47.511967 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 21:56:47.512241 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 14 21:56:47.527170 jq[1314]: true
Jul 14 21:56:47.567670 tar[1313]: linux-arm64/helm
Jul 14 21:56:47.595808 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 21:56:47.596086 systemd[1]: Finished motdgen.service.
Jul 14 21:56:47.598936 extend-filesystems[1296]: Found loop1
Jul 14 21:56:47.599940 extend-filesystems[1296]: Found vda
Jul 14 21:56:47.600530 extend-filesystems[1296]: Found vda1
Jul 14 21:56:47.601236 extend-filesystems[1296]: Found vda2
Jul 14 21:56:47.601909 extend-filesystems[1296]: Found vda3
Jul 14 21:56:47.602578 extend-filesystems[1296]: Found usr
Jul 14 21:56:47.603289 extend-filesystems[1296]: Found vda4
Jul 14 21:56:47.603998 extend-filesystems[1296]: Found vda6
Jul 14 21:56:47.604957 extend-filesystems[1296]: Found vda7
Jul 14 21:56:47.605884 extend-filesystems[1296]: Found vda9
Jul 14 21:56:47.605884 extend-filesystems[1296]: Checking size of /dev/vda9
Jul 14 21:56:47.608215 dbus-daemon[1294]: [system] SELinux support is enabled
Jul 14 21:56:47.608399 systemd[1]: Started dbus.service.
Jul 14 21:56:47.611785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 21:56:47.611827 systemd[1]: Reached target system-config.target.
Jul 14 21:56:47.612628 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 21:56:47.612690 systemd[1]: Reached target user-config.target.
Jul 14 21:56:47.619921 bash[1341]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 21:56:47.629197 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 14 21:56:47.658608 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 14 21:56:47.659679 systemd-logind[1305]: New seat seat0.
Jul 14 21:56:47.663286 systemd[1]: Started systemd-logind.service.
Jul 14 21:56:47.683277 extend-filesystems[1296]: Resized partition /dev/vda9
Jul 14 21:56:47.705128 extend-filesystems[1354]: resize2fs 1.46.5 (30-Dec-2021)
Jul 14 21:56:47.728626 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 21:56:47.732896 update_engine[1308]: I0714 21:56:47.732545 1308 main.cc:92] Flatcar Update Engine starting
Jul 14 21:56:47.736665 systemd[1]: Started update-engine.service.
Jul 14 21:56:47.736787 update_engine[1308]: I0714 21:56:47.736658 1308 update_check_scheduler.cc:74] Next update check in 10m44s
Jul 14 21:56:47.740265 systemd[1]: Started locksmithd.service.
Jul 14 21:56:47.747607 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 21:56:47.764391 extend-filesystems[1354]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 21:56:47.764391 extend-filesystems[1354]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 21:56:47.764391 extend-filesystems[1354]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 21:56:47.768852 extend-filesystems[1296]: Resized filesystem in /dev/vda9
Jul 14 21:56:47.768819 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 21:56:47.769136 systemd[1]: Finished extend-filesystems.service.
Jul 14 21:56:47.784285 env[1319]: time="2025-07-14T21:56:47.778969520Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 14 21:56:47.808604 env[1319]: time="2025-07-14T21:56:47.808528840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 21:56:47.808758 env[1319]: time="2025-07-14T21:56:47.808733600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:56:47.810377 env[1319]: time="2025-07-14T21:56:47.810330480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:56:47.810377 env[1319]: time="2025-07-14T21:56:47.810373520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:56:47.810710 env[1319]: time="2025-07-14T21:56:47.810683360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:56:47.810710 env[1319]: time="2025-07-14T21:56:47.810707840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 21:56:47.810774 env[1319]: time="2025-07-14T21:56:47.810722280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 14 21:56:47.810774 env[1319]: time="2025-07-14T21:56:47.810732560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 21:56:47.810824 env[1319]: time="2025-07-14T21:56:47.810807680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:56:47.811214 env[1319]: time="2025-07-14T21:56:47.811186880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:56:47.811360 env[1319]: time="2025-07-14T21:56:47.811338000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:56:47.811360 env[1319]: time="2025-07-14T21:56:47.811357360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 21:56:47.811432 env[1319]: time="2025-07-14T21:56:47.811414080Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 14 21:56:47.811432 env[1319]: time="2025-07-14T21:56:47.811433960Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 21:56:47.815256 env[1319]: time="2025-07-14T21:56:47.815226560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 21:56:47.815321 env[1319]: time="2025-07-14T21:56:47.815257280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 21:56:47.815321 env[1319]: time="2025-07-14T21:56:47.815271120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 21:56:47.815321 env[1319]: time="2025-07-14T21:56:47.815308040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815398 env[1319]: time="2025-07-14T21:56:47.815324240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815398 env[1319]: time="2025-07-14T21:56:47.815338400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815398 env[1319]: time="2025-07-14T21:56:47.815351080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815735 env[1319]: time="2025-07-14T21:56:47.815708040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815735 env[1319]: time="2025-07-14T21:56:47.815734800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815799 env[1319]: time="2025-07-14T21:56:47.815749520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815799 env[1319]: time="2025-07-14T21:56:47.815763240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.815799 env[1319]: time="2025-07-14T21:56:47.815776640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 21:56:47.815914 env[1319]: time="2025-07-14T21:56:47.815893520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 21:56:47.815992 env[1319]: time="2025-07-14T21:56:47.815974640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 21:56:47.816306 env[1319]: time="2025-07-14T21:56:47.816282680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 21:56:47.816349 env[1319]: time="2025-07-14T21:56:47.816315120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816349 env[1319]: time="2025-07-14T21:56:47.816331480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 21:56:47.816611 env[1319]: time="2025-07-14T21:56:47.816574160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816611 env[1319]: time="2025-07-14T21:56:47.816611000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816681 env[1319]: time="2025-07-14T21:56:47.816632920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816681 env[1319]: time="2025-07-14T21:56:47.816644960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816681 env[1319]: time="2025-07-14T21:56:47.816656600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816681 env[1319]: time="2025-07-14T21:56:47.816668320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816681 env[1319]: time="2025-07-14T21:56:47.816679240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816786 env[1319]: time="2025-07-14T21:56:47.816691080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816786 env[1319]: time="2025-07-14T21:56:47.816705840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 21:56:47.816834 env[1319]: time="2025-07-14T21:56:47.816822840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816856 env[1319]: time="2025-07-14T21:56:47.816838680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816856 env[1319]: time="2025-07-14T21:56:47.816852720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.816914 env[1319]: time="2025-07-14T21:56:47.816865960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 21:56:47.816914 env[1319]: time="2025-07-14T21:56:47.816893600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 14 21:56:47.816914 env[1319]: time="2025-07-14T21:56:47.816906120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 21:56:47.816982 env[1319]: time="2025-07-14T21:56:47.816922560Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 14 21:56:47.816982 env[1319]: time="2025-07-14T21:56:47.816955360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 21:56:47.817216 env[1319]: time="2025-07-14T21:56:47.817153320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 14 21:56:47.817216 env[1319]: time="2025-07-14T21:56:47.817216640Z" level=info msg="Connect containerd service"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.817251520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.818026040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.818537400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.818576680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.818645640Z" level=info msg="containerd successfully booted in 0.041345s"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.821068480Z" level=info msg="Start subscribing containerd event"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.821126880Z" level=info msg="Start recovering state"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.821193280Z" level=info msg="Start event monitor"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.821211640Z" level=info msg="Start snapshots syncer"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.821224800Z" level=info msg="Start cni network conf syncer for default"
Jul 14 21:56:47.824200 env[1319]: time="2025-07-14T21:56:47.821305480Z" level=info msg="Start streaming server"
Jul 14 21:56:47.818786 systemd[1]: Started containerd.service.
Jul 14 21:56:47.828244 locksmithd[1355]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 21:56:48.016433 tar[1313]: linux-arm64/LICENSE
Jul 14 21:56:48.016656 tar[1313]: linux-arm64/README.md
Jul 14 21:56:48.021384 systemd[1]: Finished prepare-helm.service.
Jul 14 21:56:48.322797 systemd-networkd[1104]: eth0: Gained IPv6LL
Jul 14 21:56:48.324984 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 14 21:56:48.328150 systemd[1]: Reached target network-online.target.
Jul 14 21:56:48.332035 systemd[1]: Starting kubelet.service...
Jul 14 21:56:48.955770 systemd[1]: Started kubelet.service.
Jul 14 21:56:49.400220 kubelet[1380]: E0714 21:56:49.400164 1380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:56:49.401735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:56:49.401911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:56:49.407167 sshd_keygen[1327]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 21:56:49.424495 systemd[1]: Finished sshd-keygen.service.
Jul 14 21:56:49.426702 systemd[1]: Starting issuegen.service...
Jul 14 21:56:49.431250 systemd[1]: issuegen.service: Deactivated successfully.
Jul 14 21:56:49.431464 systemd[1]: Finished issuegen.service.
Jul 14 21:56:49.433474 systemd[1]: Starting systemd-user-sessions.service...
Jul 14 21:56:49.440796 systemd[1]: Finished systemd-user-sessions.service.
Jul 14 21:56:49.442937 systemd[1]: Started getty@tty1.service.
Jul 14 21:56:49.444786 systemd[1]: Started serial-getty@ttyAMA0.service.
Jul 14 21:56:49.445604 systemd[1]: Reached target getty.target.
Jul 14 21:56:49.446216 systemd[1]: Reached target multi-user.target.
Jul 14 21:56:49.448206 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 14 21:56:49.454521 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 14 21:56:49.454751 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 14 21:56:49.455554 systemd[1]: Startup finished in 45.258s (kernel) + 5.185s (userspace) = 50.443s.
Jul 14 21:56:52.046571 systemd[1]: Created slice system-sshd.slice.
Jul 14 21:56:52.047741 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:37130.service.
Jul 14 21:56:52.098464 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 37130 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.100535 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.111705 systemd[1]: Created slice user-500.slice.
Jul 14 21:56:52.112621 systemd[1]: Starting user-runtime-dir@500.service...
Jul 14 21:56:52.114469 systemd-logind[1305]: New session 1 of user core.
Jul 14 21:56:52.120932 systemd[1]: Finished user-runtime-dir@500.service.
Jul 14 21:56:52.122074 systemd[1]: Starting user@500.service...
Jul 14 21:56:52.125734 (systemd)[1411]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.184351 systemd[1411]: Queued start job for default target default.target.
Jul 14 21:56:52.184565 systemd[1411]: Reached target paths.target.
Jul 14 21:56:52.184587 systemd[1411]: Reached target sockets.target.
Jul 14 21:56:52.184599 systemd[1411]: Reached target timers.target.
Jul 14 21:56:52.184608 systemd[1411]: Reached target basic.target.
Jul 14 21:56:52.184717 systemd[1]: Started user@500.service.
Jul 14 21:56:52.185195 systemd[1411]: Reached target default.target.
Jul 14 21:56:52.185241 systemd[1411]: Startup finished in 54ms.
Jul 14 21:56:52.185489 systemd[1]: Started session-1.scope.
Jul 14 21:56:52.233413 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:37136.service.
Jul 14 21:56:52.271464 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 37136 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.272678 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.276933 systemd[1]: Started session-2.scope.
Jul 14 21:56:52.277123 systemd-logind[1305]: New session 2 of user core.
Jul 14 21:56:52.331138 sshd[1420]: pam_unix(sshd:session): session closed for user core
Jul 14 21:56:52.332001 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:37152.service.
Jul 14 21:56:52.334521 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:37136.service: Deactivated successfully.
Jul 14 21:56:52.335223 systemd[1]: session-2.scope: Deactivated successfully.
Jul 14 21:56:52.335688 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit.
Jul 14 21:56:52.336319 systemd-logind[1305]: Removed session 2.
Jul 14 21:56:52.371133 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.372825 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.377149 systemd[1]: Started session-3.scope.
Jul 14 21:56:52.377193 systemd-logind[1305]: New session 3 of user core.
Jul 14 21:56:52.426250 sshd[1425]: pam_unix(sshd:session): session closed for user core
Jul 14 21:56:52.428404 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:37164.service.
Jul 14 21:56:52.429610 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:37152.service: Deactivated successfully.
Jul 14 21:56:52.430776 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit.
Jul 14 21:56:52.430844 systemd[1]: session-3.scope: Deactivated successfully.
Jul 14 21:56:52.431511 systemd-logind[1305]: Removed session 3.
Jul 14 21:56:52.467470 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 37164 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.468540 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.471728 systemd-logind[1305]: New session 4 of user core.
Jul 14 21:56:52.472539 systemd[1]: Started session-4.scope.
Jul 14 21:56:52.523534 sshd[1433]: pam_unix(sshd:session): session closed for user core
Jul 14 21:56:52.525851 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:37168.service.
Jul 14 21:56:52.526349 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:37164.service: Deactivated successfully.
Jul 14 21:56:52.527303 systemd[1]: session-4.scope: Deactivated successfully.
Jul 14 21:56:52.527627 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit.
Jul 14 21:56:52.528369 systemd-logind[1305]: Removed session 4.
Jul 14 21:56:52.563372 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 37168 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.564671 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.567627 systemd-logind[1305]: New session 5 of user core.
Jul 14 21:56:52.568367 systemd[1]: Started session-5.scope.
Jul 14 21:56:52.634445 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 14 21:56:52.634805 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 21:56:52.650732 dbus-daemon[1294]: avc: received setenforce notice (enforcing=1)
Jul 14 21:56:52.651537 sudo[1446]: pam_unix(sudo:session): session closed for user root
Jul 14 21:56:52.653545 sshd[1440]: pam_unix(sshd:session): session closed for user core
Jul 14 21:56:52.655926 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:50766.service.
Jul 14 21:56:52.656968 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:37168.service: Deactivated successfully.
Jul 14 21:56:52.657957 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit.
Jul 14 21:56:52.658027 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 21:56:52.658720 systemd-logind[1305]: Removed session 5.
Jul 14 21:56:52.695434 sshd[1448]: Accepted publickey for core from 10.0.0.1 port 50766 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.696948 sshd[1448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.700359 systemd-logind[1305]: New session 6 of user core.
Jul 14 21:56:52.701203 systemd[1]: Started session-6.scope.
Jul 14 21:56:52.753135 sudo[1455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 14 21:56:52.753352 sudo[1455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 21:56:52.755968 sudo[1455]: pam_unix(sudo:session): session closed for user root
Jul 14 21:56:52.760180 sudo[1454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 14 21:56:52.760386 sudo[1454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 21:56:52.768402 systemd[1]: Stopping audit-rules.service...
Jul 14 21:56:52.769000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 14 21:56:52.769969 auditctl[1458]: No rules
Jul 14 21:56:52.770401 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 21:56:52.770621 systemd[1]: Stopped audit-rules.service.
Jul 14 21:56:52.772512 kernel: kauditd_printk_skb: 124 callbacks suppressed
Jul 14 21:56:52.772575 kernel: audit: type=1305 audit(1752530212.769:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 14 21:56:52.772604 kernel: audit: type=1300 audit(1752530212.769:157): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff41b87c0 a2=420 a3=0 items=0 ppid=1 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:56:52.769000 audit[1458]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff41b87c0 a2=420 a3=0 items=0 ppid=1 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:56:52.772014 systemd[1]: Starting audit-rules.service...
Jul 14 21:56:52.774311 kernel: audit: type=1327 audit(1752530212.769:157): proctitle=2F7362696E2F617564697463746C002D44
Jul 14 21:56:52.769000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Jul 14 21:56:52.775139 kernel: audit: type=1131 audit(1752530212.770:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.788084 augenrules[1476]: No rules
Jul 14 21:56:52.789076 systemd[1]: Finished audit-rules.service.
Jul 14 21:56:52.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.789880 sudo[1454]: pam_unix(sudo:session): session closed for user root
Jul 14 21:56:52.788000 audit[1454]: USER_END pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.793838 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:50778.service.
Jul 14 21:56:52.793930 kernel: audit: type=1130 audit(1752530212.787:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.793970 kernel: audit: type=1106 audit(1752530212.788:160): pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.793988 kernel: audit: type=1104 audit(1752530212.788:161): pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.788000 audit[1454]: CRED_DISP pid=1454 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.793889 sshd[1448]: pam_unix(sshd:session): session closed for user core
Jul 14 21:56:52.797757 kernel: audit: type=1130 audit(1752530212.792:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.75:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.75:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.796607 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:50766.service: Deactivated successfully.
Jul 14 21:56:52.797467 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 21:56:52.797577 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit.
Jul 14 21:56:52.799607 kernel: audit: type=1106 audit(1752530212.793:163): pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.793000 audit[1448]: USER_END pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.798500 systemd-logind[1305]: Removed session 6.
Jul 14 21:56:52.801446 kernel: audit: type=1104 audit(1752530212.793:164): pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.793000 audit[1448]: CRED_DISP pid=1448 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.75:22-10.0.0.1:50766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.832000 audit[1481]: USER_ACCT pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.833779 sshd[1481]: Accepted publickey for core from 10.0.0.1 port 50778 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 21:56:52.833000 audit[1481]: CRED_ACQ pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.833000 audit[1481]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffefef23a0 a2=3 a3=1 items=0 ppid=1 pid=1481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:56:52.833000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 14 21:56:52.835058 sshd[1481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:56:52.839091 systemd-logind[1305]: New session 7 of user core.
Jul 14 21:56:52.839420 systemd[1]: Started session-7.scope.
Jul 14 21:56:52.842000 audit[1481]: USER_START pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.844000 audit[1486]: CRED_ACQ pid=1486 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:56:52.890000 audit[1487]: USER_ACCT pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.891567 sudo[1487]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 21:56:52.892000 audit[1487]: CRED_REFR pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.893334 sudo[1487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 21:56:52.894000 audit[1487]: USER_START pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:56:52.950242 systemd[1]: Starting docker.service...
Jul 14 21:56:53.032577 env[1499]: time="2025-07-14T21:56:53.032526430Z" level=info msg="Starting up"
Jul 14 21:56:53.034075 env[1499]: time="2025-07-14T21:56:53.034052620Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 21:56:53.034075 env[1499]: time="2025-07-14T21:56:53.034072997Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 21:56:53.034159 env[1499]: time="2025-07-14T21:56:53.034091885Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 21:56:53.034159 env[1499]: time="2025-07-14T21:56:53.034102700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 21:56:53.036444 env[1499]: time="2025-07-14T21:56:53.036420572Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 21:56:53.036524 env[1499]: time="2025-07-14T21:56:53.036510035Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 21:56:53.036599 env[1499]: time="2025-07-14T21:56:53.036577043Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 21:56:53.036653 env[1499]: time="2025-07-14T21:56:53.036639937Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 21:56:53.044036 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport277453953-merged.mount: Deactivated successfully.
Jul 14 21:56:53.213352 env[1499]: time="2025-07-14T21:56:53.213263943Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 14 21:56:53.213352 env[1499]: time="2025-07-14T21:56:53.213298623Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 14 21:56:53.213521 env[1499]: time="2025-07-14T21:56:53.213424921Z" level=info msg="Loading containers: start."
Jul 14 21:56:53.258000 audit[1533]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.258000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc5b3cb90 a2=0 a3=1 items=0 ppid=1499 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 14 21:56:53.260000 audit[1535]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.260000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe4675c80 a2=0 a3=1 items=0 ppid=1499 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 14 21:56:53.262000 audit[1537]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.262000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff6303840 a2=0 a3=1 items=0 ppid=1499 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 14 21:56:53.265000 
audit[1539]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.265000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffab29ec0 a2=0 a3=1 items=0 ppid=1499 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.265000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 14 21:56:53.269000 audit[1541]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.269000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd4298fd0 a2=0 a3=1 items=0 ppid=1499 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.269000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 14 21:56:53.301000 audit[1546]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.301000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffebba7ff0 a2=0 a3=1 items=0 ppid=1499 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.301000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 14 21:56:53.310000 audit[1548]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.310000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd77e4030 a2=0 a3=1 items=0 ppid=1499 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 14 21:56:53.312000 audit[1550]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.312000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd800b550 a2=0 a3=1 items=0 ppid=1499 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 14 21:56:53.314000 audit[1552]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.314000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc8531b00 a2=0 a3=1 items=0 ppid=1499 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.314000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 14 21:56:53.322000 audit[1556]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.322000 audit[1556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc3b5a180 a2=0 a3=1 items=0 ppid=1499 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 14 21:56:53.338000 audit[1557]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.338000 audit[1557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffef2bfa20 a2=0 a3=1 items=0 ppid=1499 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.338000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 14 21:56:53.349596 kernel: Initializing XFRM netlink socket Jul 14 21:56:53.381325 env[1499]: time="2025-07-14T21:56:53.381282511Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 14 21:56:53.403000 audit[1565]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.403000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffffb060f10 a2=0 a3=1 items=0 ppid=1499 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.403000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 14 21:56:53.418000 audit[1568]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.418000 audit[1568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe94d8af0 a2=0 a3=1 items=0 ppid=1499 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.418000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 14 21:56:53.420000 audit[1571]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.420000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff5034cb0 a2=0 a3=1 items=0 ppid=1499 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 
14 21:56:53.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 14 21:56:53.422000 audit[1573]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.422000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe6e4a7c0 a2=0 a3=1 items=0 ppid=1499 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.422000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 14 21:56:53.424000 audit[1575]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.424000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe7524ac0 a2=0 a3=1 items=0 ppid=1499 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 14 21:56:53.426000 audit[1577]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.426000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffdac59bb0 a2=0 a3=1 items=0 ppid=1499 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.426000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 14 21:56:53.428000 audit[1579]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.428000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff01435d0 a2=0 a3=1 items=0 ppid=1499 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.428000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 14 21:56:53.435000 audit[1582]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.435000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffeae3e410 a2=0 a3=1 items=0 ppid=1499 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.435000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 14 21:56:53.443000 audit[1584]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.443000 
audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe01aee20 a2=0 a3=1 items=0 ppid=1499 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 14 21:56:53.445000 audit[1586]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.445000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffe512810 a2=0 a3=1 items=0 ppid=1499 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.445000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 14 21:56:53.448000 audit[1588]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.448000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff2d69760 a2=0 a3=1 items=0 ppid=1499 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.448000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 14 21:56:53.449736 systemd-networkd[1104]: docker0: Link UP Jul 14 21:56:53.457000 audit[1592]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.457000 audit[1592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdd77fe20 a2=0 a3=1 items=0 ppid=1499 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.457000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 14 21:56:53.473000 audit[1593]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:56:53.473000 audit[1593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffea7c3d0 a2=0 a3=1 items=0 ppid=1499 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:56:53.473000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 14 21:56:53.474476 env[1499]: time="2025-07-14T21:56:53.474446233Z" level=info msg="Loading containers: done." Jul 14 21:56:53.497301 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4256468044-merged.mount: Deactivated successfully. 
Jul 14 21:56:53.503175 env[1499]: time="2025-07-14T21:56:53.503135807Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:56:53.503895 env[1499]: time="2025-07-14T21:56:53.503864242Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 14 21:56:53.504083 env[1499]: time="2025-07-14T21:56:53.504066443Z" level=info msg="Daemon has completed initialization" Jul 14 21:56:53.519753 systemd[1]: Started docker.service. Jul 14 21:56:53.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:53.526423 env[1499]: time="2025-07-14T21:56:53.526375462Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:56:59.581222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:56:59.583321 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 14 21:56:59.583364 kernel: audit: type=1130 audit(1752530219.579:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:59.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:59.581408 systemd[1]: Stopped kubelet.service. Jul 14 21:56:59.583002 systemd[1]: Starting kubelet.service... Jul 14 21:56:59.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:56:59.585673 kernel: audit: type=1131 audit(1752530219.579:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:59.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:59.677930 systemd[1]: Started kubelet.service. Jul 14 21:56:59.681632 kernel: audit: type=1130 audit(1752530219.677:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:56:59.713180 kubelet[1636]: E0714 21:56:59.713129 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:56:59.715757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:56:59.715896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:56:59.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 21:56:59.718611 kernel: audit: type=1131 audit(1752530219.714:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 14 21:57:03.842758 env[1319]: time="2025-07-14T21:57:03.842705665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 14 21:57:09.831326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 21:57:09.831494 systemd[1]: Stopped kubelet.service. Jul 14 21:57:09.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:09.832968 systemd[1]: Starting kubelet.service... Jul 14 21:57:09.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:09.838983 kernel: audit: type=1130 audit(1752530229.829:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:09.839062 kernel: audit: type=1131 audit(1752530229.829:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:09.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:09.927257 systemd[1]: Started kubelet.service. Jul 14 21:57:09.929608 kernel: audit: type=1130 audit(1752530229.925:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:57:09.962119 kubelet[1655]: E0714 21:57:09.962060 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:57:09.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 21:57:09.963913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:57:09.964076 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:57:09.966616 kernel: audit: type=1131 audit(1752530229.962:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 21:57:15.117775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446851277.mount: Deactivated successfully. 
Jul 14 21:57:16.619477 env[1319]: time="2025-07-14T21:57:16.619427874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:16.621077 env[1319]: time="2025-07-14T21:57:16.621040752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:16.622796 env[1319]: time="2025-07-14T21:57:16.622764245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:16.625452 env[1319]: time="2025-07-14T21:57:16.625406071Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:16.625804 env[1319]: time="2025-07-14T21:57:16.625775522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" Jul 14 21:57:16.628936 env[1319]: time="2025-07-14T21:57:16.628890501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 14 21:57:18.307537 env[1319]: time="2025-07-14T21:57:18.307486298Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:18.309450 env[1319]: time="2025-07-14T21:57:18.309424377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 14 21:57:18.311505 env[1319]: time="2025-07-14T21:57:18.311472537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:18.313591 env[1319]: time="2025-07-14T21:57:18.313552274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:18.314293 env[1319]: time="2025-07-14T21:57:18.314253328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" Jul 14 21:57:18.315470 env[1319]: time="2025-07-14T21:57:18.315423642Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 14 21:57:19.918615 env[1319]: time="2025-07-14T21:57:19.918534331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:19.921238 env[1319]: time="2025-07-14T21:57:19.921191721Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:19.923553 env[1319]: time="2025-07-14T21:57:19.923515313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:19.925796 env[1319]: time="2025-07-14T21:57:19.925747480Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:19.926627 env[1319]: time="2025-07-14T21:57:19.926592728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" Jul 14 21:57:19.927079 env[1319]: time="2025-07-14T21:57:19.927047652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 14 21:57:20.081253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 21:57:20.081422 systemd[1]: Stopped kubelet.service. Jul 14 21:57:20.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:20.082958 systemd[1]: Starting kubelet.service... Jul 14 21:57:20.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:20.086595 kernel: audit: type=1130 audit(1752530240.080:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:20.086667 kernel: audit: type=1131 audit(1752530240.080:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:20.176634 systemd[1]: Started kubelet.service. 
Jul 14 21:57:20.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:20.179621 kernel: audit: type=1130 audit(1752530240.176:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:20.212255 kubelet[1671]: E0714 21:57:20.212213 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:57:20.214660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:57:20.214796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:57:20.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 21:57:20.217607 kernel: audit: type=1131 audit(1752530240.214:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 21:57:21.098514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309233326.mount: Deactivated successfully. 
Jul 14 21:57:21.703692 env[1319]: time="2025-07-14T21:57:21.703639437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:21.705590 env[1319]: time="2025-07-14T21:57:21.705559264Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:21.707084 env[1319]: time="2025-07-14T21:57:21.707052637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:21.709710 env[1319]: time="2025-07-14T21:57:21.709450410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:21.713971 env[1319]: time="2025-07-14T21:57:21.713917844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" Jul 14 21:57:21.714430 env[1319]: time="2025-07-14T21:57:21.714388788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:57:22.276302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497245160.mount: Deactivated successfully. 
Jul 14 21:57:23.192563 env[1319]: time="2025-07-14T21:57:23.192513668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.193891 env[1319]: time="2025-07-14T21:57:23.193862096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.196285 env[1319]: time="2025-07-14T21:57:23.196241049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.197526 env[1319]: time="2025-07-14T21:57:23.197496338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.198493 env[1319]: time="2025-07-14T21:57:23.198464771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 14 21:57:23.199076 env[1319]: time="2025-07-14T21:57:23.199054168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 14 21:57:23.700754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219083069.mount: Deactivated successfully.
Jul 14 21:57:23.704311 env[1319]: time="2025-07-14T21:57:23.704276186Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.706461 env[1319]: time="2025-07-14T21:57:23.706421292Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.707918 env[1319]: time="2025-07-14T21:57:23.707895385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.709356 env[1319]: time="2025-07-14T21:57:23.709333351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:23.710025 env[1319]: time="2025-07-14T21:57:23.710000723Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 14 21:57:23.711126 env[1319]: time="2025-07-14T21:57:23.711090940Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 14 21:57:24.215511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776011312.mount: Deactivated successfully.
Jul 14 21:57:26.433966 env[1319]: time="2025-07-14T21:57:26.433909785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:26.436382 env[1319]: time="2025-07-14T21:57:26.436346155Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:26.438151 env[1319]: time="2025-07-14T21:57:26.438103331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:26.440715 env[1319]: time="2025-07-14T21:57:26.440685085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:57:26.441545 env[1319]: time="2025-07-14T21:57:26.441513625Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 14 21:57:30.331230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 14 21:57:30.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:30.331400 systemd[1]: Stopped kubelet.service.
Jul 14 21:57:30.332916 systemd[1]: Starting kubelet.service...
Jul 14 21:57:30.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:30.335036 kernel: audit: type=1130 audit(1752530250.330:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:30.335121 kernel: audit: type=1131 audit(1752530250.330:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:30.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:30.424958 systemd[1]: Started kubelet.service.
Jul 14 21:57:30.427689 kernel: audit: type=1130 audit(1752530250.424:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:30.458129 kubelet[1695]: E0714 21:57:30.458051 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:57:30.460149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:57:30.460291 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:57:30.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 14 21:57:30.462603 kernel: audit: type=1131 audit(1752530250.460:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 14 21:57:33.026904 update_engine[1308]: I0714 21:57:33.026825 1308 update_attempter.cc:509] Updating boot flags...
Jul 14 21:57:40.581242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 14 21:57:40.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:40.581411 systemd[1]: Stopped kubelet.service.
Jul 14 21:57:40.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:40.583227 systemd[1]: Starting kubelet.service...
Jul 14 21:57:40.585117 kernel: audit: type=1130 audit(1752530260.580:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:40.585184 kernel: audit: type=1131 audit(1752530260.580:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:40.706818 systemd[1]: Started kubelet.service.
Jul 14 21:57:40.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:40.709615 kernel: audit: type=1130 audit(1752530260.706:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:40.751173 kubelet[1742]: E0714 21:57:40.751121 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:57:40.753029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:57:40.753170 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:57:40.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 14 21:57:40.755618 kernel: audit: type=1131 audit(1752530260.752:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 14 21:57:41.950270 systemd[1]: Stopped kubelet.service.
Jul 14 21:57:41.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:41.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:41.952846 systemd[1]: Starting kubelet.service...
Jul 14 21:57:41.954123 kernel: audit: type=1130 audit(1752530261.950:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:41.954169 kernel: audit: type=1131 audit(1752530261.950:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:41.979684 systemd[1]: Reloading.
Jul 14 21:57:42.027142 /usr/lib/systemd/system-generators/torcx-generator[1778]: time="2025-07-14T21:57:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]"
Jul 14 21:57:42.027177 /usr/lib/systemd/system-generators/torcx-generator[1778]: time="2025-07-14T21:57:42Z" level=info msg="torcx already run"
Jul 14 21:57:42.183557 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 14 21:57:42.183576 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 14 21:57:42.200327 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:57:42.262447 systemd[1]: Started kubelet.service.
Jul 14 21:57:42.263696 systemd[1]: Stopping kubelet.service...
Jul 14 21:57:42.264454 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 21:57:42.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:42.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:42.264707 systemd[1]: Stopped kubelet.service.
Jul 14 21:57:42.266190 systemd[1]: Starting kubelet.service...
Jul 14 21:57:42.270758 kernel: audit: type=1130 audit(1752530262.262:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:42.270818 kernel: audit: type=1131 audit(1752530262.264:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:42.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:42.372948 systemd[1]: Started kubelet.service.
Jul 14 21:57:42.375609 kernel: audit: type=1130 audit(1752530262.372:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:57:42.410245 kubelet[1837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:57:42.410245 kubelet[1837]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 14 21:57:42.410245 kubelet[1837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:57:42.410245 kubelet[1837]: I0714 21:57:42.409905 1837 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 21:57:43.215739 kubelet[1837]: I0714 21:57:43.215686 1837 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 14 21:57:43.215739 kubelet[1837]: I0714 21:57:43.215725 1837 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 21:57:43.215991 kubelet[1837]: I0714 21:57:43.215961 1837 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 14 21:57:43.248228 kubelet[1837]: E0714 21:57:43.248188 1837 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:57:43.254785 kubelet[1837]: I0714 21:57:43.254736 1837 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 21:57:43.262101 kubelet[1837]: E0714 21:57:43.262059 1837 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 21:57:43.262101 kubelet[1837]: I0714 21:57:43.262101 1837 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 21:57:43.265666 kubelet[1837]: I0714 21:57:43.265638 1837 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:57:43.266850 kubelet[1837]: I0714 21:57:43.266822 1837 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 14 21:57:43.267110 kubelet[1837]: I0714 21:57:43.267078 1837 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 21:57:43.267349 kubelet[1837]: I0714 21:57:43.267179 1837 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 14 21:57:43.267478 kubelet[1837]: I0714 21:57:43.267465 1837 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 21:57:43.267546 kubelet[1837]: I0714 21:57:43.267537 1837 container_manager_linux.go:300] "Creating device plugin manager"
Jul 14 21:57:43.267862 kubelet[1837]: I0714 21:57:43.267846 1837 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:57:43.274810 kubelet[1837]: I0714 21:57:43.274781 1837 kubelet.go:408] "Attempting to sync node with API server"
Jul 14 21:57:43.274936 kubelet[1837]: I0714 21:57:43.274924 1837 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 21:57:43.275030 kubelet[1837]: I0714 21:57:43.275012 1837 kubelet.go:314] "Adding apiserver pod source"
Jul 14 21:57:43.275111 kubelet[1837]: I0714 21:57:43.275099 1837 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 21:57:43.288338 kubelet[1837]: I0714 21:57:43.288313 1837 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 14 21:57:43.289215 kubelet[1837]: I0714 21:57:43.289190 1837 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 21:57:43.289507 kubelet[1837]: W0714 21:57:43.289490 1837 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 14 21:57:43.290577 kubelet[1837]: I0714 21:57:43.290556 1837 server.go:1274] "Started kubelet"
Jul 14 21:57:43.296561 kubelet[1837]: I0714 21:57:43.296526 1837 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:57:43.297492 kubelet[1837]: I0714 21:57:43.297470 1837 server.go:449] "Adding debug handlers to kubelet server"
Jul 14 21:57:43.299000 audit[1837]: AVC avc: denied { mac_admin } for pid=1837 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:57:43.301633 kernel: audit: type=1400 audit(1752530263.299:224): avc: denied { mac_admin } for pid=1837 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:57:43.299000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Jul 14 21:57:43.299000 audit[1837]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400096af30 a1=4000a98438 a2=400096af00 a3=25 items=0 ppid=1 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Jul 14 21:57:43.299000 audit[1837]: AVC avc: denied { mac_admin } for pid=1837 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:57:43.299000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Jul 14 21:57:43.299000 audit[1837]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a9a400 a1=4000a98450 a2=400096afc0 a3=25 items=0 ppid=1 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Jul 14 21:57:43.301932 kubelet[1837]: I0714 21:57:43.299674 1837 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Jul 14 21:57:43.301932 kubelet[1837]: I0714 21:57:43.299704 1837 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Jul 14 21:57:43.301932 kubelet[1837]: I0714 21:57:43.299758 1837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:57:43.301000 audit[1850]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.301000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc4cff540 a2=0 a3=1 items=0 ppid=1837 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 14 21:57:43.302000 audit[1851]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.302000 audit[1851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea6744e0 a2=0 a3=1 items=0 ppid=1837 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul 14 21:57:43.303931 kubelet[1837]: W0714 21:57:43.303902 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Jul 14 21:57:43.304060 kubelet[1837]: E0714 21:57:43.304025 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:57:43.304372 kubelet[1837]: I0714 21:57:43.304330 1837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:57:43.305382 kubelet[1837]: I0714 21:57:43.305364 1837 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 14 21:57:43.305631 kubelet[1837]: E0714 21:57:43.305612 1837 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:57:43.305000 audit[1853]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.305000 audit[1853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffee11f910 a2=0 a3=1 items=0 ppid=1837 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 14 21:57:43.307036 kubelet[1837]: W0714 21:57:43.306979 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Jul 14 21:57:43.307088 kubelet[1837]: E0714 21:57:43.307048 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:57:43.307125 kubelet[1837]: E0714 21:57:43.307108 1837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms"
Jul 14 21:57:43.307726 kubelet[1837]: I0714 21:57:43.307678 1837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:57:43.307833 kubelet[1837]: I0714 21:57:43.307810 1837 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 14 21:57:43.307885 kubelet[1837]: I0714 21:57:43.307861 1837 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:57:43.308095 kubelet[1837]: I0714 21:57:43.308075 1837 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:57:43.308568 kubelet[1837]: I0714 21:57:43.308548 1837 factory.go:221] Registration of the systemd container factory successfully
Jul 14 21:57:43.307000 audit[1855]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.307000 audit[1855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff7aab930 a2=0 a3=1 items=0 ppid=1837 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 14 21:57:43.308870 kubelet[1837]: I0714 21:57:43.308849 1837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:57:43.310137 kubelet[1837]: W0714 21:57:43.310101 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Jul 14 21:57:43.310203 kubelet[1837]: E0714 21:57:43.310139 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:57:43.310988 kubelet[1837]: E0714 21:57:43.310963 1837 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 21:57:43.312246 kubelet[1837]: I0714 21:57:43.312219 1837 factory.go:221] Registration of the containerd container factory successfully
Jul 14 21:57:43.312313 kubelet[1837]: E0714 21:57:43.311172 1837 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523cfd398ddf42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:57:43.290535746 +0000 UTC m=+0.914375223,LastTimestamp:2025-07-14 21:57:43.290535746 +0000 UTC m=+0.914375223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 21:57:43.318000 audit[1858]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1858 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.318000 audit[1858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffee885a90 a2=0 a3=1 items=0 ppid=1837 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jul 14 21:57:43.319499 kubelet[1837]: I0714 21:57:43.319468 1837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:57:43.319000 audit[1861]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:43.319000 audit[1861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff33aa5c0 a2=0 a3=1 items=0 ppid=1837 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 14 21:57:43.322479 kubelet[1837]: I0714 21:57:43.322439 1837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:57:43.322479 kubelet[1837]: I0714 21:57:43.322471 1837 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 14 21:57:43.322572 kubelet[1837]: I0714 21:57:43.322493 1837 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 14 21:57:43.322572 kubelet[1837]: E0714 21:57:43.322538 1837 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 21:57:43.322000 audit[1862]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.322000 audit[1862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdec91af0 a2=0 a3=1 items=0 ppid=1837 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 14 21:57:43.323371 kubelet[1837]: W0714 21:57:43.323334 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Jul 14 21:57:43.323418 kubelet[1837]: E0714 21:57:43.323378 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:57:43.323000 audit[1863]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:43.323000 audit[1863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1c78400 a2=0 a3=1 items=0 ppid=1837 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:43.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 14 21:57:43.323000 audit[1865]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 21:57:43.323000 audit[1865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa464bb0 a2=0 a3=1 items=0 ppid=1837 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0
key=(null) Jul 14 21:57:43.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 14 21:57:43.324000 audit[1867]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:43.324000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe84a44d0 a2=0 a3=1 items=0 ppid=1837 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:43.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 14 21:57:43.325000 audit[1868]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:43.325000 audit[1868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffffa4a5d00 a2=0 a3=1 items=0 ppid=1837 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:43.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 14 21:57:43.326000 audit[1869]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:43.326000 audit[1869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc483e3b0 a2=0 a3=1 items=0 ppid=1837 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:43.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 14 21:57:43.331566 kubelet[1837]: I0714 21:57:43.331539 1837 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:57:43.331566 kubelet[1837]: I0714 21:57:43.331556 1837 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:57:43.331714 kubelet[1837]: I0714 21:57:43.331573 1837 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:57:43.333389 kubelet[1837]: I0714 21:57:43.333360 1837 policy_none.go:49] "None policy: Start" Jul 14 21:57:43.333981 kubelet[1837]: I0714 21:57:43.333967 1837 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:57:43.334083 kubelet[1837]: I0714 21:57:43.334070 1837 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:57:43.338920 kubelet[1837]: I0714 21:57:43.338896 1837 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:57:43.338000 audit[1837]: AVC avc: denied { mac_admin } for pid=1837 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:57:43.338000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 21:57:43.338000 audit[1837]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008cd050 a1=4000b59428 a2=40008cd020 a3=25 items=0 ppid=1 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:43.338000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 21:57:43.339264 kubelet[1837]: I0714 21:57:43.339242 1837 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 14 21:57:43.339413 kubelet[1837]: I0714 21:57:43.339399 1837 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:57:43.339511 kubelet[1837]: I0714 21:57:43.339480 1837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:57:43.341054 kubelet[1837]: I0714 21:57:43.341012 1837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:57:43.341989 kubelet[1837]: E0714 21:57:43.341967 1837 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:57:43.440839 kubelet[1837]: I0714 21:57:43.440810 1837 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:57:43.441333 kubelet[1837]: E0714 21:57:43.441294 1837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Jul 14 21:57:43.507997 kubelet[1837]: E0714 21:57:43.507879 1837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms" Jul 14 21:57:43.510304 kubelet[1837]: I0714 21:57:43.510278 1837 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b69fdf8d99932f02c074be77a583644-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b69fdf8d99932f02c074be77a583644\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:43.510785 kubelet[1837]: I0714 21:57:43.510442 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:43.510951 kubelet[1837]: I0714 21:57:43.510923 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:43.511092 kubelet[1837]: I0714 21:57:43.511076 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:43.511197 kubelet[1837]: I0714 21:57:43.511180 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:43.511278 kubelet[1837]: I0714 21:57:43.511264 1837 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:57:43.511368 kubelet[1837]: I0714 21:57:43.511355 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b69fdf8d99932f02c074be77a583644-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b69fdf8d99932f02c074be77a583644\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:43.511454 kubelet[1837]: I0714 21:57:43.511440 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b69fdf8d99932f02c074be77a583644-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5b69fdf8d99932f02c074be77a583644\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:43.511534 kubelet[1837]: I0714 21:57:43.511522 1837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:43.642623 kubelet[1837]: I0714 21:57:43.642565 1837 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:57:43.642998 kubelet[1837]: E0714 21:57:43.642969 1837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Jul 14 21:57:43.728639 kubelet[1837]: E0714 21:57:43.728577 1837 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:43.728832 kubelet[1837]: E0714 21:57:43.728812 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:43.729503 env[1319]: time="2025-07-14T21:57:43.729235327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5b69fdf8d99932f02c074be77a583644,Namespace:kube-system,Attempt:0,}" Jul 14 21:57:43.729503 env[1319]: time="2025-07-14T21:57:43.729378018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Jul 14 21:57:43.730115 kubelet[1837]: E0714 21:57:43.730091 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:43.730475 env[1319]: time="2025-07-14T21:57:43.730423415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Jul 14 21:57:43.908781 kubelet[1837]: E0714 21:57:43.908732 1837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms" Jul 14 21:57:44.044991 kubelet[1837]: I0714 21:57:44.044652 1837 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:57:44.044991 kubelet[1837]: E0714 21:57:44.044956 1837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: 
connection refused" node="localhost" Jul 14 21:57:44.305072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187944800.mount: Deactivated successfully. Jul 14 21:57:44.310428 env[1319]: time="2025-07-14T21:57:44.310382858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.311431 env[1319]: time="2025-07-14T21:57:44.311394969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.312380 env[1319]: time="2025-07-14T21:57:44.312351757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.313431 env[1319]: time="2025-07-14T21:57:44.313395831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.314825 env[1319]: time="2025-07-14T21:57:44.314802130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.315449 env[1319]: time="2025-07-14T21:57:44.315426694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.317843 env[1319]: time="2025-07-14T21:57:44.317814023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.321127 env[1319]: 
time="2025-07-14T21:57:44.321090775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.324286 env[1319]: time="2025-07-14T21:57:44.324250478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.325217 env[1319]: time="2025-07-14T21:57:44.325188264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.326358 env[1319]: time="2025-07-14T21:57:44.326330265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.327264 env[1319]: time="2025-07-14T21:57:44.327236609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:57:44.352216 env[1319]: time="2025-07-14T21:57:44.352148649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:57:44.352216 env[1319]: time="2025-07-14T21:57:44.352189052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:57:44.352216 env[1319]: time="2025-07-14T21:57:44.352199453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:57:44.352521 env[1319]: time="2025-07-14T21:57:44.352471952Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3975b6b30cd497d5422ea610e2e7b0c8b15ca6495e1991af6276c17a112d7979 pid=1890 runtime=io.containerd.runc.v2 Jul 14 21:57:44.353246 env[1319]: time="2025-07-14T21:57:44.353013310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:57:44.353246 env[1319]: time="2025-07-14T21:57:44.353049713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:57:44.353246 env[1319]: time="2025-07-14T21:57:44.353060954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:57:44.353554 env[1319]: time="2025-07-14T21:57:44.353422899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a47ea821900ffae5b21e360054d6191db6f128476747b103ef2499cc3c2eff69 pid=1898 runtime=io.containerd.runc.v2 Jul 14 21:57:44.353869 env[1319]: time="2025-07-14T21:57:44.353712560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:57:44.353869 env[1319]: time="2025-07-14T21:57:44.353740122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:57:44.353869 env[1319]: time="2025-07-14T21:57:44.353750282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:57:44.354089 env[1319]: time="2025-07-14T21:57:44.354044543Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/786d1e525a387d7b93877b5c9337b6dbfb176085b7b7c123aae4ee4de96d30eb pid=1895 runtime=io.containerd.runc.v2 Jul 14 21:57:44.434596 env[1319]: time="2025-07-14T21:57:44.433833741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"786d1e525a387d7b93877b5c9337b6dbfb176085b7b7c123aae4ee4de96d30eb\"" Jul 14 21:57:44.440694 kubelet[1837]: E0714 21:57:44.440671 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:44.442418 env[1319]: time="2025-07-14T21:57:44.442378545Z" level=info msg="CreateContainer within sandbox \"786d1e525a387d7b93877b5c9337b6dbfb176085b7b7c123aae4ee4de96d30eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:57:44.442634 env[1319]: time="2025-07-14T21:57:44.442597481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3975b6b30cd497d5422ea610e2e7b0c8b15ca6495e1991af6276c17a112d7979\"" Jul 14 21:57:44.443247 kubelet[1837]: E0714 21:57:44.443224 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:44.445304 env[1319]: time="2025-07-14T21:57:44.445268950Z" level=info msg="CreateContainer within sandbox \"3975b6b30cd497d5422ea610e2e7b0c8b15ca6495e1991af6276c17a112d7979\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 
21:57:44.453734 env[1319]: time="2025-07-14T21:57:44.453700985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5b69fdf8d99932f02c074be77a583644,Namespace:kube-system,Attempt:0,} returns sandbox id \"a47ea821900ffae5b21e360054d6191db6f128476747b103ef2499cc3c2eff69\"" Jul 14 21:57:44.454291 kubelet[1837]: E0714 21:57:44.454270 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:44.455700 env[1319]: time="2025-07-14T21:57:44.455672245Z" level=info msg="CreateContainer within sandbox \"a47ea821900ffae5b21e360054d6191db6f128476747b103ef2499cc3c2eff69\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:57:44.456057 env[1319]: time="2025-07-14T21:57:44.456022669Z" level=info msg="CreateContainer within sandbox \"786d1e525a387d7b93877b5c9337b6dbfb176085b7b7c123aae4ee4de96d30eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1665e7c8a13e27a8a23cbbdd6cfd60181929910af0538fabd836c2c2ebccaf87\"" Jul 14 21:57:44.456490 env[1319]: time="2025-07-14T21:57:44.456464341Z" level=info msg="StartContainer for \"1665e7c8a13e27a8a23cbbdd6cfd60181929910af0538fabd836c2c2ebccaf87\"" Jul 14 21:57:44.459619 env[1319]: time="2025-07-14T21:57:44.459579761Z" level=info msg="CreateContainer within sandbox \"3975b6b30cd497d5422ea610e2e7b0c8b15ca6495e1991af6276c17a112d7979\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7113a4443e189c84dceb0130d05628c6f100a175893fe06a2be27b35a49535f1\"" Jul 14 21:57:44.460037 env[1319]: time="2025-07-14T21:57:44.460004031Z" level=info msg="StartContainer for \"7113a4443e189c84dceb0130d05628c6f100a175893fe06a2be27b35a49535f1\"" Jul 14 21:57:44.468858 kubelet[1837]: W0714 21:57:44.468809 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jul 14 21:57:44.468858 kubelet[1837]: E0714 21:57:44.468853 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:57:44.470074 env[1319]: time="2025-07-14T21:57:44.470036940Z" level=info msg="CreateContainer within sandbox \"a47ea821900ffae5b21e360054d6191db6f128476747b103ef2499cc3c2eff69\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec698f23f4c4b505937285fb18fe015d5b4cf89449df11d2f78747f85bf7915e\"" Jul 14 21:57:44.470580 env[1319]: time="2025-07-14T21:57:44.470545496Z" level=info msg="StartContainer for \"ec698f23f4c4b505937285fb18fe015d5b4cf89449df11d2f78747f85bf7915e\"" Jul 14 21:57:44.545953 env[1319]: time="2025-07-14T21:57:44.545903661Z" level=info msg="StartContainer for \"1665e7c8a13e27a8a23cbbdd6cfd60181929910af0538fabd836c2c2ebccaf87\" returns successfully" Jul 14 21:57:44.569152 env[1319]: time="2025-07-14T21:57:44.557957313Z" level=info msg="StartContainer for \"7113a4443e189c84dceb0130d05628c6f100a175893fe06a2be27b35a49535f1\" returns successfully" Jul 14 21:57:44.581890 env[1319]: time="2025-07-14T21:57:44.581425531Z" level=info msg="StartContainer for \"ec698f23f4c4b505937285fb18fe015d5b4cf89449df11d2f78747f85bf7915e\" returns successfully" Jul 14 21:57:44.704026 kubelet[1837]: W0714 21:57:44.703961 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jul 14 21:57:44.704182 kubelet[1837]: E0714 
21:57:44.704031 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:57:44.709982 kubelet[1837]: E0714 21:57:44.709948 1837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="1.6s" Jul 14 21:57:44.710102 kubelet[1837]: W0714 21:57:44.710002 1837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jul 14 21:57:44.710196 kubelet[1837]: E0714 21:57:44.710179 1837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:57:44.846870 kubelet[1837]: I0714 21:57:44.846835 1837 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:57:45.329720 kubelet[1837]: E0714 21:57:45.329627 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:45.331971 kubelet[1837]: E0714 21:57:45.331946 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 14 21:57:45.333690 kubelet[1837]: E0714 21:57:45.333669 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:46.335831 kubelet[1837]: E0714 21:57:46.335798 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:46.593307 kubelet[1837]: E0714 21:57:46.593192 1837 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:57:46.714149 kubelet[1837]: E0714 21:57:46.714060 1837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18523cfd398ddf42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:57:43.290535746 +0000 UTC m=+0.914375223,LastTimestamp:2025-07-14 21:57:43.290535746 +0000 UTC m=+0.914375223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:57:46.778374 kubelet[1837]: I0714 21:57:46.778345 1837 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:57:46.778528 kubelet[1837]: E0714 21:57:46.778514 1837 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:57:46.787736 kubelet[1837]: E0714 21:57:46.787711 1837 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:57:46.888524 
kubelet[1837]: E0714 21:57:46.888435 1837 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:57:47.276482 kubelet[1837]: I0714 21:57:47.276341 1837 apiserver.go:52] "Watching apiserver" Jul 14 21:57:47.308725 kubelet[1837]: I0714 21:57:47.308686 1837 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:57:48.489332 systemd[1]: Reloading. Jul 14 21:57:48.507669 kubelet[1837]: E0714 21:57:48.507606 1837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:48.531750 /usr/lib/systemd/system-generators/torcx-generator[2140]: time="2025-07-14T21:57:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 21:57:48.531782 /usr/lib/systemd/system-generators/torcx-generator[2140]: time="2025-07-14T21:57:48Z" level=info msg="torcx already run" Jul 14 21:57:48.598335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 21:57:48.598356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 21:57:48.615240 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:57:48.681457 systemd[1]: Stopping kubelet.service... Jul 14 21:57:48.704973 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 14 21:57:48.705267 systemd[1]: Stopped kubelet.service. Jul 14 21:57:48.707666 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 14 21:57:48.707734 kernel: audit: type=1131 audit(1752530268.703:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:48.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:48.707101 systemd[1]: Starting kubelet.service... Jul 14 21:57:48.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:48.806609 systemd[1]: Started kubelet.service. Jul 14 21:57:48.811455 kernel: audit: type=1130 audit(1752530268.805:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:57:48.846361 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:57:48.846799 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:57:48.846848 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:57:48.846989 kubelet[2191]: I0714 21:57:48.846952 2191 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:57:48.856537 kubelet[2191]: I0714 21:57:48.856493 2191 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:57:48.856537 kubelet[2191]: I0714 21:57:48.856519 2191 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:57:48.856760 kubelet[2191]: I0714 21:57:48.856734 2191 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:57:48.858005 kubelet[2191]: I0714 21:57:48.857983 2191 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 21:57:48.859784 kubelet[2191]: I0714 21:57:48.859760 2191 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:57:48.866270 kubelet[2191]: E0714 21:57:48.866235 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:57:48.866270 kubelet[2191]: I0714 21:57:48.866264 2191 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:57:48.868721 kubelet[2191]: I0714 21:57:48.868702 2191 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:57:48.869016 kubelet[2191]: I0714 21:57:48.868994 2191 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:57:48.869116 kubelet[2191]: I0714 21:57:48.869087 2191 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:57:48.869261 kubelet[2191]: I0714 21:57:48.869112 2191 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 14 21:57:48.869355 kubelet[2191]: I0714 21:57:48.869263 2191 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:57:48.869355 kubelet[2191]: I0714 21:57:48.869273 2191 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:57:48.869355 kubelet[2191]: I0714 21:57:48.869302 2191 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:57:48.869436 kubelet[2191]: I0714 21:57:48.869390 2191 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:57:48.869436 kubelet[2191]: I0714 21:57:48.869403 2191 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:57:48.869436 kubelet[2191]: I0714 21:57:48.869417 2191 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:57:48.869436 kubelet[2191]: I0714 21:57:48.869429 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:57:48.870719 kubelet[2191]: I0714 21:57:48.870631 2191 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 21:57:48.871222 kubelet[2191]: I0714 21:57:48.871201 2191 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:57:48.871660 kubelet[2191]: I0714 21:57:48.871638 2191 server.go:1274] "Started kubelet" Jul 14 21:57:48.873537 kubelet[2191]: I0714 21:57:48.873501 2191 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 14 21:57:48.873627 kubelet[2191]: I0714 21:57:48.873610 2191 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 14 21:57:48.873660 kubelet[2191]: I0714 
21:57:48.873641 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:57:48.874257 kubelet[2191]: I0714 21:57:48.873899 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:57:48.875083 kubelet[2191]: I0714 21:57:48.875063 2191 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:57:48.875175 kubelet[2191]: E0714 21:57:48.875161 2191 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:57:48.875445 kubelet[2191]: I0714 21:57:48.875424 2191 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:57:48.876378 kubelet[2191]: I0714 21:57:48.875541 2191 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:57:48.890017 kernel: audit: type=1400 audit(1752530268.871:241): avc: denied { mac_admin } for pid=2191 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:57:48.890077 kernel: audit: type=1401 audit(1752530268.871:241): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 21:57:48.890107 kernel: audit: type=1300 audit(1752530268.871:241): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c64c90 a1=40005d6b28 a2=4000c64c60 a3=25 items=0 ppid=1 pid=2191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:48.871000 audit[2191]: AVC avc: denied { mac_admin } for pid=2191 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:57:48.871000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 21:57:48.871000 audit[2191]: SYSCALL 
arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c64c90 a1=40005d6b28 a2=4000c64c60 a3=25 items=0 ppid=1 pid=2191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:48.890249 kubelet[2191]: I0714 21:57:48.878973 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:57:48.890249 kubelet[2191]: I0714 21:57:48.879205 2191 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:57:48.890249 kubelet[2191]: I0714 21:57:48.878634 2191 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:57:48.890249 kubelet[2191]: I0714 21:57:48.879739 2191 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:57:48.890249 kubelet[2191]: I0714 21:57:48.881123 2191 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:57:48.890249 kubelet[2191]: I0714 21:57:48.884146 2191 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:57:48.871000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 21:57:48.894182 kernel: audit: type=1327 audit(1752530268.871:241): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 21:57:48.871000 audit[2191]: AVC 
avc: denied { mac_admin } for pid=2191 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:57:48.896075 kernel: audit: type=1400 audit(1752530268.871:242): avc: denied { mac_admin } for pid=2191 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:57:48.871000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 21:57:48.896723 kubelet[2191]: I0714 21:57:48.896681 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:57:48.897146 kernel: audit: type=1401 audit(1752530268.871:242): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 21:57:48.897185 kernel: audit: type=1300 audit(1752530268.871:242): arch=c00000b7 syscall=5 success=no exit=-22 a0=40008ff700 a1=40005d6b40 a2=4000c64d20 a3=25 items=0 ppid=1 pid=2191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:48.871000 audit[2191]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008ff700 a1=40005d6b40 a2=4000c64d20 a3=25 items=0 ppid=1 pid=2191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:48.899270 kubelet[2191]: I0714 21:57:48.899253 2191 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:57:48.871000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 21:57:48.901814 
kubelet[2191]: I0714 21:57:48.901785 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:57:48.901814 kubelet[2191]: I0714 21:57:48.901811 2191 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:57:48.901900 kubelet[2191]: I0714 21:57:48.901830 2191 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:57:48.901900 kubelet[2191]: E0714 21:57:48.901870 2191 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:57:48.902808 kernel: audit: type=1327 audit(1752530268.871:242): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 21:57:48.944561 kubelet[2191]: I0714 21:57:48.944540 2191 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:57:48.944708 kubelet[2191]: I0714 21:57:48.944695 2191 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:57:48.944787 kubelet[2191]: I0714 21:57:48.944778 2191 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:57:48.944961 kubelet[2191]: I0714 21:57:48.944949 2191 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:57:48.945035 kubelet[2191]: I0714 21:57:48.945011 2191 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:57:48.945086 kubelet[2191]: I0714 21:57:48.945078 2191 policy_none.go:49] "None policy: Start" Jul 14 21:57:48.945726 kubelet[2191]: I0714 21:57:48.945714 2191 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:57:48.945806 kubelet[2191]: I0714 21:57:48.945797 2191 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:57:48.945988 kubelet[2191]: I0714 21:57:48.945978 2191 state_mem.go:75] "Updated 
machine memory state" Jul 14 21:57:48.947125 kubelet[2191]: I0714 21:57:48.947107 2191 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:57:48.945000 audit[2191]: AVC avc: denied { mac_admin } for pid=2191 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:57:48.945000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 21:57:48.945000 audit[2191]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001023a10 a1=40005d74a0 a2=40010239e0 a3=25 items=0 ppid=1 pid=2191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:48.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 21:57:48.947451 kubelet[2191]: I0714 21:57:48.947433 2191 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 14 21:57:48.947683 kubelet[2191]: I0714 21:57:48.947671 2191 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:57:48.947782 kubelet[2191]: I0714 21:57:48.947754 2191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:57:48.950322 kubelet[2191]: I0714 21:57:48.949199 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:57:49.018886 kubelet[2191]: E0714 21:57:49.018826 2191 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 21:57:49.054964 kubelet[2191]: I0714 21:57:49.054927 2191 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:57:49.064623 kubelet[2191]: I0714 21:57:49.061686 2191 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 21:57:49.064848 kubelet[2191]: I0714 21:57:49.064833 2191 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:57:49.078412 kubelet[2191]: I0714 21:57:49.078378 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b69fdf8d99932f02c074be77a583644-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b69fdf8d99932f02c074be77a583644\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:49.078599 kubelet[2191]: I0714 21:57:49.078576 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 
21:57:49.078721 kubelet[2191]: I0714 21:57:49.078687 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:49.078769 kubelet[2191]: I0714 21:57:49.078728 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:49.078769 kubelet[2191]: I0714 21:57:49.078753 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:57:49.078817 kubelet[2191]: I0714 21:57:49.078773 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b69fdf8d99932f02c074be77a583644-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b69fdf8d99932f02c074be77a583644\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:49.078817 kubelet[2191]: I0714 21:57:49.078789 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b69fdf8d99932f02c074be77a583644-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5b69fdf8d99932f02c074be77a583644\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:49.078817 
kubelet[2191]: I0714 21:57:49.078812 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:49.078892 kubelet[2191]: I0714 21:57:49.078827 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:57:49.319986 kubelet[2191]: E0714 21:57:49.319607 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:49.319986 kubelet[2191]: E0714 21:57:49.319615 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:49.320390 kubelet[2191]: E0714 21:57:49.320369 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:49.870268 kubelet[2191]: I0714 21:57:49.870238 2191 apiserver.go:52] "Watching apiserver" Jul 14 21:57:49.875866 kubelet[2191]: I0714 21:57:49.875836 2191 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:57:49.921000 kubelet[2191]: E0714 21:57:49.920966 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:49.922606 kubelet[2191]: E0714 21:57:49.922389 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:49.925911 kubelet[2191]: E0714 21:57:49.925876 2191 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 21:57:49.926035 kubelet[2191]: E0714 21:57:49.926018 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:49.938186 kubelet[2191]: I0714 21:57:49.938125 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.93808924 podStartE2EDuration="938.08924ms" podCreationTimestamp="2025-07-14 21:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:57:49.937827585 +0000 UTC m=+1.127431607" watchObservedRunningTime="2025-07-14 21:57:49.93808924 +0000 UTC m=+1.127693262" Jul 14 21:57:49.951049 kubelet[2191]: I0714 21:57:49.950975 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.950956232 podStartE2EDuration="950.956232ms" podCreationTimestamp="2025-07-14 21:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:57:49.944907998 +0000 UTC m=+1.134512060" watchObservedRunningTime="2025-07-14 21:57:49.950956232 +0000 UTC m=+1.140560254" Jul 14 21:57:49.951203 kubelet[2191]: I0714 21:57:49.951090 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.951084039 podStartE2EDuration="1.951084039s" podCreationTimestamp="2025-07-14 21:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:57:49.950819624 +0000 UTC m=+1.140423646" watchObservedRunningTime="2025-07-14 21:57:49.951084039 +0000 UTC m=+1.140688101" Jul 14 21:57:50.921908 kubelet[2191]: E0714 21:57:50.921872 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:55.392833 kubelet[2191]: I0714 21:57:55.392794 2191 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 21:57:55.393647 env[1319]: time="2025-07-14T21:57:55.393611476Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 21:57:55.393927 kubelet[2191]: I0714 21:57:55.393800 2191 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:57:56.327023 kubelet[2191]: E0714 21:57:56.326988 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:56.335565 kubelet[2191]: I0714 21:57:56.335527 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6336b40-13cd-4eff-bb25-4d84936b464d-xtables-lock\") pod \"kube-proxy-w6zft\" (UID: \"f6336b40-13cd-4eff-bb25-4d84936b464d\") " pod="kube-system/kube-proxy-w6zft" Jul 14 21:57:56.335565 kubelet[2191]: I0714 21:57:56.335566 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrxg\" (UniqueName: \"kubernetes.io/projected/f6336b40-13cd-4eff-bb25-4d84936b464d-kube-api-access-jnrxg\") pod \"kube-proxy-w6zft\" (UID: \"f6336b40-13cd-4eff-bb25-4d84936b464d\") " pod="kube-system/kube-proxy-w6zft" Jul 14 21:57:56.335737 kubelet[2191]: I0714 21:57:56.335610 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6336b40-13cd-4eff-bb25-4d84936b464d-kube-proxy\") pod \"kube-proxy-w6zft\" (UID: \"f6336b40-13cd-4eff-bb25-4d84936b464d\") " pod="kube-system/kube-proxy-w6zft" Jul 14 21:57:56.335737 kubelet[2191]: I0714 21:57:56.335629 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6336b40-13cd-4eff-bb25-4d84936b464d-lib-modules\") pod \"kube-proxy-w6zft\" (UID: \"f6336b40-13cd-4eff-bb25-4d84936b464d\") " pod="kube-system/kube-proxy-w6zft" Jul 14 21:57:56.445404 kubelet[2191]: E0714 21:57:56.445369 2191 
projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 14 21:57:56.445827 kubelet[2191]: E0714 21:57:56.445810 2191 projected.go:194] Error preparing data for projected volume kube-api-access-jnrxg for pod kube-system/kube-proxy-w6zft: configmap "kube-root-ca.crt" not found Jul 14 21:57:56.445961 kubelet[2191]: E0714 21:57:56.445942 2191 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6336b40-13cd-4eff-bb25-4d84936b464d-kube-api-access-jnrxg podName:f6336b40-13cd-4eff-bb25-4d84936b464d nodeName:}" failed. No retries permitted until 2025-07-14 21:57:56.945921153 +0000 UTC m=+8.135525175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jnrxg" (UniqueName: "kubernetes.io/projected/f6336b40-13cd-4eff-bb25-4d84936b464d-kube-api-access-jnrxg") pod "kube-proxy-w6zft" (UID: "f6336b40-13cd-4eff-bb25-4d84936b464d") : configmap "kube-root-ca.crt" not found Jul 14 21:57:56.488106 kubelet[2191]: E0714 21:57:56.488069 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:56.930437 kubelet[2191]: E0714 21:57:56.929757 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:56.930710 kubelet[2191]: E0714 21:57:56.930685 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:57.027016 kubelet[2191]: E0714 21:57:57.026984 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:57.040438 kubelet[2191]: I0714 
21:57:57.040406 2191 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 21:57:57.144422 kubelet[2191]: E0714 21:57:57.144354 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:57.145265 env[1319]: time="2025-07-14T21:57:57.145106928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w6zft,Uid:f6336b40-13cd-4eff-bb25-4d84936b464d,Namespace:kube-system,Attempt:0,}" Jul 14 21:57:57.158646 env[1319]: time="2025-07-14T21:57:57.158545698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:57:57.158646 env[1319]: time="2025-07-14T21:57:57.158595381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:57:57.158646 env[1319]: time="2025-07-14T21:57:57.158606461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:57:57.158934 env[1319]: time="2025-07-14T21:57:57.158898594Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02f3c854ae0266198a3b64340bb5906b6247a81f1f701f69076dcce004a6183f pid=2244 runtime=io.containerd.runc.v2 Jul 14 21:57:57.208000 env[1319]: time="2025-07-14T21:57:57.207516082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w6zft,Uid:f6336b40-13cd-4eff-bb25-4d84936b464d,Namespace:kube-system,Attempt:0,} returns sandbox id \"02f3c854ae0266198a3b64340bb5906b6247a81f1f701f69076dcce004a6183f\"" Jul 14 21:57:57.208756 kubelet[2191]: E0714 21:57:57.208736 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:57:57.211320 env[1319]: time="2025-07-14T21:57:57.211283053Z" level=info msg="CreateContainer within sandbox \"02f3c854ae0266198a3b64340bb5906b6247a81f1f701f69076dcce004a6183f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 21:57:57.223193 env[1319]: time="2025-07-14T21:57:57.223150272Z" level=info msg="CreateContainer within sandbox \"02f3c854ae0266198a3b64340bb5906b6247a81f1f701f69076dcce004a6183f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"223367726bab239f93cb190dc6f4a9e8b6fdae29bfe0aca5a9ba33d56785b447\"" Jul 14 21:57:57.224079 env[1319]: time="2025-07-14T21:57:57.224048992Z" level=info msg="StartContainer for \"223367726bab239f93cb190dc6f4a9e8b6fdae29bfe0aca5a9ba33d56785b447\"" Jul 14 21:57:57.288369 env[1319]: time="2025-07-14T21:57:57.286231455Z" level=info msg="StartContainer for \"223367726bab239f93cb190dc6f4a9e8b6fdae29bfe0aca5a9ba33d56785b447\" returns successfully" Jul 14 21:57:57.490631 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 14 21:57:57.490753 kernel: audit: type=1325 audit(1752530277.488:244): 
table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.490795 kernel: audit: type=1300 audit(1752530277.488:244): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc66b7990 a2=0 a3=1 items=0 ppid=2295 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.488000 audit[2346]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.488000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc66b7990 a2=0 a3=1 items=0 ppid=2295 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 21:57:57.495967 kernel: audit: type=1327 audit(1752530277.488:244): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 21:57:57.496073 kernel: audit: type=1325 audit(1752530277.489:245): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2345 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.489000 audit[2345]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2345 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.489000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc81f7b50 a2=0 a3=1 items=0 ppid=2295 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.500543 kernel: audit: type=1300 audit(1752530277.489:245): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc81f7b50 a2=0 a3=1 items=0 ppid=2295 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.500633 kernel: audit: type=1327 audit(1752530277.489:245): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 21:57:57.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 21:57:57.493000 audit[2347]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.503664 kernel: audit: type=1325 audit(1752530277.493:246): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.503702 kernel: audit: type=1300 audit(1752530277.493:246): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6837d60 a2=0 a3=1 items=0 ppid=2295 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.493000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6837d60 a2=0 a3=1 items=0 ppid=2295 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.493000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 14 21:57:57.508198 kernel: audit: type=1327 audit(1752530277.493:246): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 14 21:57:57.508247 kernel: audit: type=1325 audit(1752530277.494:247): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.494000 audit[2348]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.494000 audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4cc32b0 a2=0 a3=1 items=0 ppid=2295 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 14 21:57:57.495000 audit[2349]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.495000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff15a1f70 a2=0 a3=1 items=0 ppid=2295 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 14 21:57:57.496000 audit[2350]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 
14 21:57:57.496000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdeafc2b0 a2=0 a3=1 items=0 ppid=2295 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 14 21:57:57.590000 audit[2351]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.590000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff5fa8400 a2=0 a3=1 items=0 ppid=2295 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 14 21:57:57.594000 audit[2353]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.594000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe9a61390 a2=0 a3=1 items=0 ppid=2295 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 
14 21:57:57.597000 audit[2356]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.597000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcac758e0 a2=0 a3=1 items=0 ppid=2295 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 14 21:57:57.598000 audit[2357]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.598000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef9f4d50 a2=0 a3=1 items=0 ppid=2295 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 14 21:57:57.601000 audit[2359]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.601000 audit[2359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeeb527b0 a2=0 a3=1 items=0 ppid=2295 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 
21:57:57.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 14 21:57:57.602000 audit[2360]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.602000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff94caf0 a2=0 a3=1 items=0 ppid=2295 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 14 21:57:57.604000 audit[2362]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.604000 audit[2362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc651bdc0 a2=0 a3=1 items=0 ppid=2295 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 14 21:57:57.607000 audit[2365]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.607000 audit[2365]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=744 a0=3 a1=ffffc24c5c40 a2=0 a3=1 items=0 ppid=2295 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 14 21:57:57.609000 audit[2366]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.609000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc198efc0 a2=0 a3=1 items=0 ppid=2295 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 14 21:57:57.611000 audit[2368]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.611000 audit[2368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd076c9c0 a2=0 a3=1 items=0 ppid=2295 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.611000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 14 21:57:57.612000 audit[2369]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.612000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8134ef0 a2=0 a3=1 items=0 ppid=2295 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 14 21:57:57.614000 audit[2371]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.614000 audit[2371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd61ccdb0 a2=0 a3=1 items=0 ppid=2295 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 14 21:57:57.618000 audit[2374]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.618000 audit[2374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec734da0 a2=0 
a3=1 items=0 ppid=2295 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 14 21:57:57.621000 audit[2377]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.621000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef80bea0 a2=0 a3=1 items=0 ppid=2295 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 14 21:57:57.622000 audit[2378]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.622000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffffaac3c0 a2=0 a3=1 items=0 ppid=2295 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 
Jul 14 21:57:57.624000 audit[2380]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.624000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe2b95110 a2=0 a3=1 items=0 ppid=2295 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 14 21:57:57.628000 audit[2383]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.628000 audit[2383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd40686c0 a2=0 a3=1 items=0 ppid=2295 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 14 21:57:57.629000 audit[2384]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.629000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffe8fc80 a2=0 a3=1 items=0 ppid=2295 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 14 21:57:57.631000 audit[2387]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 21:57:57.631000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffef7d99f0 a2=0 a3=1 items=0 ppid=2295 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.631000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 14 21:57:57.656000 audit[2393]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:57:57.656000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdda5f9b0 a2=0 a3=1 items=0 ppid=2295 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:57:57.667000 audit[2393]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:57:57.667000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffdda5f9b0 a2=0 a3=1 items=0 
ppid=2295 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:57:57.669000 audit[2398]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.669000 audit[2398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffec251780 a2=0 a3=1 items=0 ppid=2295 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 14 21:57:57.671000 audit[2400]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.671000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffef269290 a2=0 a3=1 items=0 ppid=2295 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 14 21:57:57.675000 audit[2403]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2403 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.675000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffd098260 a2=0 a3=1 items=0 ppid=2295 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 14 21:57:57.676000 audit[2404]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.676000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe346e260 a2=0 a3=1 items=0 ppid=2295 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 14 21:57:57.679000 audit[2406]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.679000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff5decfd0 a2=0 a3=1 items=0 ppid=2295 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.679000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 14 21:57:57.680000 audit[2407]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.680000 audit[2407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff952d830 a2=0 a3=1 items=0 ppid=2295 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 14 21:57:57.682000 audit[2409]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.682000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed3d6ba0 a2=0 a3=1 items=0 ppid=2295 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 14 21:57:57.685000 audit[2412]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.685000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 
a0=3 a1=ffffd93dfa80 a2=0 a3=1 items=0 ppid=2295 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 14 21:57:57.686000 audit[2413]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.686000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb9d8ae0 a2=0 a3=1 items=0 ppid=2295 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 14 21:57:57.689000 audit[2415]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 21:57:57.689000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf180540 a2=0 a3=1 items=0 ppid=2295 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:57:57.689000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Jul 14 21:57:57.690000 audit[2416]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.690000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe758a880 a2=0 a3=1 items=0 ppid=2295 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.690000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Jul 14 21:57:57.692000 audit[2418]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.692000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1fd7e40 a2=0 a3=1 items=0 ppid=2295 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jul 14 21:57:57.695000 audit[2421]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.695000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe00ccbf0 a2=0 a3=1 items=0 ppid=2295 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Jul 14 21:57:57.698000 audit[2424]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.698000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd050dc70 a2=0 a3=1 items=0 ppid=2295 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Jul 14 21:57:57.699000 audit[2425]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.699000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc7b64580 a2=0 a3=1 items=0 ppid=2295 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Jul 14 21:57:57.702000 audit[2427]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.702000 audit[2427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffef3f7290 a2=0 a3=1 items=0 ppid=2295 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jul 14 21:57:57.705000 audit[2430]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.705000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff3d9de30 a2=0 a3=1 items=0 ppid=2295 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jul 14 21:57:57.706000 audit[2431]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.706000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb73ea30 a2=0 a3=1 items=0 ppid=2295 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Jul 14 21:57:57.709000 audit[2433]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.709000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe9f0a3e0 a2=0 a3=1 items=0 ppid=2295 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Jul 14 21:57:57.710000 audit[2434]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.710000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebaf22b0 a2=0 a3=1 items=0 ppid=2295 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul 14 21:57:57.712000 audit[2436]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.712000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe659b9a0 a2=0 a3=1 items=0 ppid=2295 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 14 21:57:57.715000 audit[2439]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 14 21:57:57.715000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdc3c4d20 a2=0 a3=1 items=0 ppid=2295 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 14 21:57:57.719000 audit[2441]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Jul 14 21:57:57.719000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd9cbe4f0 a2=0 a3=1 items=0 ppid=2295 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.719000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:57:57.720000 audit[2441]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Jul 14 21:57:57.720000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd9cbe4f0 a2=0 a3=1 items=0 ppid=2295 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:57:57.720000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:57:57.933386 kubelet[2191]: E0714 21:57:57.933351 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:57:57.934077 kubelet[2191]: E0714 21:57:57.933404 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:57:57.951201 kubelet[2191]: I0714 21:57:57.951137 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w6zft" podStartSLOduration=1.951117642 podStartE2EDuration="1.951117642s" podCreationTimestamp="2025-07-14 21:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:57:57.951098361 +0000 UTC m=+9.140702343" watchObservedRunningTime="2025-07-14 21:57:57.951117642 +0000 UTC m=+9.140726144"
Jul 14 21:58:00.260455 kubelet[2191]: I0714 21:58:00.260402 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92v5\" (UniqueName: \"kubernetes.io/projected/2eda1bee-6bd1-4b30-b75e-d2a77005730f-kube-api-access-t92v5\") pod \"tigera-operator-5bf8dfcb4-wjdwh\" (UID: \"2eda1bee-6bd1-4b30-b75e-d2a77005730f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wjdwh"
Jul 14 21:58:00.260941 kubelet[2191]: I0714 21:58:00.260913 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2eda1bee-6bd1-4b30-b75e-d2a77005730f-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-wjdwh\" (UID: \"2eda1bee-6bd1-4b30-b75e-d2a77005730f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wjdwh"
Jul 14 21:58:00.471361 env[1319]: time="2025-07-14T21:58:00.471311661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wjdwh,Uid:2eda1bee-6bd1-4b30-b75e-d2a77005730f,Namespace:tigera-operator,Attempt:0,}"
Jul 14 21:58:00.485178 env[1319]: time="2025-07-14T21:58:00.485118961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:58:00.485277 env[1319]: time="2025-07-14T21:58:00.485194324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:58:00.485277 env[1319]: time="2025-07-14T21:58:00.485221485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:58:00.485406 env[1319]: time="2025-07-14T21:58:00.485354451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80595fbbfa89a27effcfe428332f0d0acfe90e2dc4858cbf05886330616d7d5a pid=2458 runtime=io.containerd.runc.v2
Jul 14 21:58:00.499675 systemd[1]: run-containerd-runc-k8s.io-80595fbbfa89a27effcfe428332f0d0acfe90e2dc4858cbf05886330616d7d5a-runc.rElnt8.mount: Deactivated successfully.
Jul 14 21:58:00.537870 env[1319]: time="2025-07-14T21:58:00.537515483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wjdwh,Uid:2eda1bee-6bd1-4b30-b75e-d2a77005730f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"80595fbbfa89a27effcfe428332f0d0acfe90e2dc4858cbf05886330616d7d5a\""
Jul 14 21:58:00.540779 env[1319]: time="2025-07-14T21:58:00.540740218Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 14 21:58:01.847999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586103826.mount: Deactivated successfully.
Jul 14 21:58:02.749694 env[1319]: time="2025-07-14T21:58:02.749652244Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:02.750869 env[1319]: time="2025-07-14T21:58:02.750841971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:02.752271 env[1319]: time="2025-07-14T21:58:02.752235827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:02.753572 env[1319]: time="2025-07-14T21:58:02.753547920Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:02.754315 env[1319]: time="2025-07-14T21:58:02.754254508Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 14 21:58:02.757525 env[1319]: time="2025-07-14T21:58:02.757493758Z" level=info msg="CreateContainer within sandbox \"80595fbbfa89a27effcfe428332f0d0acfe90e2dc4858cbf05886330616d7d5a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 14 21:58:02.767402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887952578.mount: Deactivated successfully.
Jul 14 21:58:02.768389 env[1319]: time="2025-07-14T21:58:02.768352513Z" level=info msg="CreateContainer within sandbox \"80595fbbfa89a27effcfe428332f0d0acfe90e2dc4858cbf05886330616d7d5a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f7192958fd5c9cc7beebeedf93874da11c93827b38bdc906dd24d60d384b7c9c\""
Jul 14 21:58:02.768912 env[1319]: time="2025-07-14T21:58:02.768883415Z" level=info msg="StartContainer for \"f7192958fd5c9cc7beebeedf93874da11c93827b38bdc906dd24d60d384b7c9c\""
Jul 14 21:58:02.825620 env[1319]: time="2025-07-14T21:58:02.823279396Z" level=info msg="StartContainer for \"f7192958fd5c9cc7beebeedf93874da11c93827b38bdc906dd24d60d384b7c9c\" returns successfully"
Jul 14 21:58:08.531304 sudo[1487]: pam_unix(sudo:session): session closed for user root
Jul 14 21:58:08.533528 kernel: kauditd_printk_skb: 143 callbacks suppressed
Jul 14 21:58:08.533577 kernel: audit: type=1106 audit(1752530288.530:295): pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:58:08.530000 audit[1487]: USER_END pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:58:08.530000 audit[1487]: CRED_DISP pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:58:08.536188 kernel: audit: type=1104 audit(1752530288.530:296): pid=1487 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 21:58:08.553757 sshd[1481]: pam_unix(sshd:session): session closed for user core
Jul 14 21:58:08.560000 audit[1481]: USER_END pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:58:08.560000 audit[1481]: CRED_DISP pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:58:08.564890 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit.
Jul 14 21:58:08.565088 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:50778.service: Deactivated successfully.
Jul 14 21:58:08.565878 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 21:58:08.566266 systemd-logind[1305]: Removed session 7.
Jul 14 21:58:08.567766 kernel: audit: type=1106 audit(1752530288.560:297): pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:58:08.567844 kernel: audit: type=1104 audit(1752530288.560:298): pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 21:58:08.567865 kernel: audit: type=1131 audit(1752530288.563:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.75:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:58:08.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.75:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 21:58:09.959000 audit[2587]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:09.959000 audit[2587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7e92050 a2=0 a3=1 items=0 ppid=2295 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:09.965259 kernel: audit: type=1325 audit(1752530289.959:300): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:09.965328 kernel: audit: type=1300 audit(1752530289.959:300): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7e92050 a2=0 a3=1 items=0 ppid=2295 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:09.959000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:09.966772 kernel: audit: type=1327 audit(1752530289.959:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:09.967000 audit[2587]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:09.967000 audit[2587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff7e92050 a2=0 a3=1 items=0 ppid=2295 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:09.973509 kernel: audit: type=1325 audit(1752530289.967:301): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:09.973556 kernel: audit: type=1300 audit(1752530289.967:301): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff7e92050 a2=0 a3=1 items=0 ppid=2295 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:09.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:09.984000 audit[2589]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:09.984000 audit[2589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe5625b60 a2=0 a3=1 items=0 ppid=2295 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:09.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:09.989000 audit[2589]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:09.989000 audit[2589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5625b60 a2=0 a3=1 items=0 ppid=2295 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:09.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:12.963000 audit[2591]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:12.963000 audit[2591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff59eceb0 a2=0 a3=1 items=0 ppid=2295 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:12.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:12.967000 audit[2591]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:12.967000 audit[2591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff59eceb0 a2=0 a3=1 items=0 ppid=2295 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:12.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:12.981000 audit[2593]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:12.981000 audit[2593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd0cc4c10 a2=0 a3=1 items=0 ppid=2295 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:12.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:12.987000 audit[2593]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:12.987000 audit[2593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd0cc4c10 a2=0 a3=1 items=0 ppid=2295 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:12.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:13.028849 kubelet[2191]: I0714 21:58:13.028726 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-wjdwh" podStartSLOduration=10.812318099 podStartE2EDuration="13.028708828s" podCreationTimestamp="2025-07-14 21:58:00 +0000 UTC" firstStartedPulling="2025-07-14 21:58:00.538689732 +0000 UTC m=+11.728293754" lastFinishedPulling="2025-07-14 21:58:02.755080461 +0000 UTC m=+13.944684483" observedRunningTime="2025-07-14 21:58:02.951033639 +0000 UTC m=+14.140637661" watchObservedRunningTime="2025-07-14 21:58:13.028708828 +0000 UTC m=+24.218312850" Jul 14 21:58:13.048820 kubelet[2191]: I0714 21:58:13.048767 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c0a260-813a-4615-b17a-fb97f65ea1c3-tigera-ca-bundle\") pod \"calico-typha-7789f8f558-jwnkk\" (UID: \"e9c0a260-813a-4615-b17a-fb97f65ea1c3\") " pod="calico-system/calico-typha-7789f8f558-jwnkk" Jul 14 21:58:13.048820 kubelet[2191]: I0714 21:58:13.048822 2191 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e9c0a260-813a-4615-b17a-fb97f65ea1c3-typha-certs\") pod \"calico-typha-7789f8f558-jwnkk\" (UID: \"e9c0a260-813a-4615-b17a-fb97f65ea1c3\") " pod="calico-system/calico-typha-7789f8f558-jwnkk" Jul 14 21:58:13.049005 kubelet[2191]: I0714 21:58:13.048844 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6tg\" (UniqueName: \"kubernetes.io/projected/e9c0a260-813a-4615-b17a-fb97f65ea1c3-kube-api-access-pv6tg\") pod \"calico-typha-7789f8f558-jwnkk\" (UID: \"e9c0a260-813a-4615-b17a-fb97f65ea1c3\") " pod="calico-system/calico-typha-7789f8f558-jwnkk" Jul 14 21:58:13.346392 kubelet[2191]: E0714 21:58:13.346351 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:13.347045 env[1319]: time="2025-07-14T21:58:13.346997218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7789f8f558-jwnkk,Uid:e9c0a260-813a-4615-b17a-fb97f65ea1c3,Namespace:calico-system,Attempt:0,}" Jul 14 21:58:13.361635 env[1319]: time="2025-07-14T21:58:13.361555739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:13.361785 env[1319]: time="2025-07-14T21:58:13.361620141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:13.361785 env[1319]: time="2025-07-14T21:58:13.361633701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:13.361964 env[1319]: time="2025-07-14T21:58:13.361935991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/049bbf373150cf0e481218accef77b02e2484b4c1a0e9d73ced039b156c92dcd pid=2604 runtime=io.containerd.runc.v2 Jul 14 21:58:13.449676 env[1319]: time="2025-07-14T21:58:13.449618166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7789f8f558-jwnkk,Uid:e9c0a260-813a-4615-b17a-fb97f65ea1c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"049bbf373150cf0e481218accef77b02e2484b4c1a0e9d73ced039b156c92dcd\"" Jul 14 21:58:13.450447 kubelet[2191]: E0714 21:58:13.450423 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:13.450762 kubelet[2191]: I0714 21:58:13.450733 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-cni-bin-dir\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450810 kubelet[2191]: I0714 21:58:13.450775 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-lib-modules\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450810 kubelet[2191]: I0714 21:58:13.450796 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-var-lib-calico\") pod \"calico-node-6xt6k\" (UID: 
\"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450875 kubelet[2191]: I0714 21:58:13.450812 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-cni-log-dir\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450875 kubelet[2191]: I0714 21:58:13.450829 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-cni-net-dir\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450875 kubelet[2191]: I0714 21:58:13.450846 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-flexvol-driver-host\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450875 kubelet[2191]: I0714 21:58:13.450862 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-xtables-lock\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450980 kubelet[2191]: I0714 21:58:13.450878 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg99s\" (UniqueName: \"kubernetes.io/projected/aa01c7c2-660c-4425-a66a-c4bbcabee50a-kube-api-access-zg99s\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " 
pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450980 kubelet[2191]: I0714 21:58:13.450896 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa01c7c2-660c-4425-a66a-c4bbcabee50a-tigera-ca-bundle\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450980 kubelet[2191]: I0714 21:58:13.450912 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-var-run-calico\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.450980 kubelet[2191]: I0714 21:58:13.450953 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa01c7c2-660c-4425-a66a-c4bbcabee50a-policysync\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.451079 kubelet[2191]: I0714 21:58:13.451032 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa01c7c2-660c-4425-a66a-c4bbcabee50a-node-certs\") pod \"calico-node-6xt6k\" (UID: \"aa01c7c2-660c-4425-a66a-c4bbcabee50a\") " pod="calico-system/calico-node-6xt6k" Jul 14 21:58:13.452085 env[1319]: time="2025-07-14T21:58:13.452050407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 21:58:13.554120 kubelet[2191]: E0714 21:58:13.553978 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.554120 kubelet[2191]: W0714 21:58:13.554000 2191 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.554120 kubelet[2191]: E0714 21:58:13.554030 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:58:13.661214 kubelet[2191]: E0714 21:58:13.660950 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vscj" podUID="b453fdfd-5b94-4411-a498-a6ed452275d0"
Jul 14 21:58:13.682448 env[1319]: time="2025-07-14T21:58:13.682403773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6xt6k,Uid:aa01c7c2-660c-4425-a66a-c4bbcabee50a,Namespace:calico-system,Attempt:0,}"
Jul 14 21:58:13.695409 env[1319]: time="2025-07-14T21:58:13.695333680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:58:13.695409 env[1319]: time="2025-07-14T21:58:13.695383362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:58:13.695409 env[1319]: time="2025-07-14T21:58:13.695394242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:58:13.695612 env[1319]: time="2025-07-14T21:58:13.695543727Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc pid=2675 runtime=io.containerd.runc.v2
Jul 14 21:58:13.738486 env[1319]: time="2025-07-14T21:58:13.738123613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6xt6k,Uid:aa01c7c2-660c-4425-a66a-c4bbcabee50a,Namespace:calico-system,Attempt:0,} returns sandbox id \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\""
Error: unexpected end of JSON input" Jul 14 21:58:13.739011 kubelet[2191]: E0714 21:58:13.739000 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.739011 kubelet[2191]: W0714 21:58:13.739010 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.739070 kubelet[2191]: E0714 21:58:13.739018 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.739184 kubelet[2191]: E0714 21:58:13.739170 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.739184 kubelet[2191]: W0714 21:58:13.739181 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.739250 kubelet[2191]: E0714 21:58:13.739189 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.739428 kubelet[2191]: E0714 21:58:13.739417 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.739428 kubelet[2191]: W0714 21:58:13.739427 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.739492 kubelet[2191]: E0714 21:58:13.739435 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.739601 kubelet[2191]: E0714 21:58:13.739577 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.739601 kubelet[2191]: W0714 21:58:13.739601 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.739676 kubelet[2191]: E0714 21:58:13.739609 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.740304 kubelet[2191]: E0714 21:58:13.740284 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.740304 kubelet[2191]: W0714 21:58:13.740294 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.740304 kubelet[2191]: E0714 21:58:13.740303 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.740942 kubelet[2191]: E0714 21:58:13.740497 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.740942 kubelet[2191]: W0714 21:58:13.740508 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.740942 kubelet[2191]: E0714 21:58:13.740515 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.742045 kubelet[2191]: E0714 21:58:13.742022 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.742045 kubelet[2191]: W0714 21:58:13.742039 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.742045 kubelet[2191]: E0714 21:58:13.742052 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.742242 kubelet[2191]: E0714 21:58:13.742208 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.742242 kubelet[2191]: W0714 21:58:13.742219 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.742242 kubelet[2191]: E0714 21:58:13.742227 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.742891 kubelet[2191]: E0714 21:58:13.742429 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.742891 kubelet[2191]: W0714 21:58:13.742441 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.742891 kubelet[2191]: E0714 21:58:13.742450 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.742891 kubelet[2191]: E0714 21:58:13.742693 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.742891 kubelet[2191]: W0714 21:58:13.742702 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.742891 kubelet[2191]: E0714 21:58:13.742711 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.755346 kubelet[2191]: E0714 21:58:13.755247 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.755346 kubelet[2191]: W0714 21:58:13.755263 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.755346 kubelet[2191]: E0714 21:58:13.755276 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.755346 kubelet[2191]: I0714 21:58:13.755302 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b453fdfd-5b94-4411-a498-a6ed452275d0-varrun\") pod \"csi-node-driver-6vscj\" (UID: \"b453fdfd-5b94-4411-a498-a6ed452275d0\") " pod="calico-system/csi-node-driver-6vscj" Jul 14 21:58:13.755534 kubelet[2191]: E0714 21:58:13.755462 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.755534 kubelet[2191]: W0714 21:58:13.755471 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.755534 kubelet[2191]: E0714 21:58:13.755486 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.755534 kubelet[2191]: I0714 21:58:13.755501 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b453fdfd-5b94-4411-a498-a6ed452275d0-kubelet-dir\") pod \"csi-node-driver-6vscj\" (UID: \"b453fdfd-5b94-4411-a498-a6ed452275d0\") " pod="calico-system/csi-node-driver-6vscj" Jul 14 21:58:13.755683 kubelet[2191]: E0714 21:58:13.755667 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.755683 kubelet[2191]: W0714 21:58:13.755680 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.755747 kubelet[2191]: E0714 21:58:13.755695 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.755747 kubelet[2191]: I0714 21:58:13.755711 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b453fdfd-5b94-4411-a498-a6ed452275d0-socket-dir\") pod \"csi-node-driver-6vscj\" (UID: \"b453fdfd-5b94-4411-a498-a6ed452275d0\") " pod="calico-system/csi-node-driver-6vscj" Jul 14 21:58:13.755886 kubelet[2191]: E0714 21:58:13.755872 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.755886 kubelet[2191]: W0714 21:58:13.755883 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.755956 kubelet[2191]: E0714 21:58:13.755896 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.755956 kubelet[2191]: I0714 21:58:13.755909 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b453fdfd-5b94-4411-a498-a6ed452275d0-registration-dir\") pod \"csi-node-driver-6vscj\" (UID: \"b453fdfd-5b94-4411-a498-a6ed452275d0\") " pod="calico-system/csi-node-driver-6vscj" Jul 14 21:58:13.756071 kubelet[2191]: E0714 21:58:13.756060 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.756071 kubelet[2191]: W0714 21:58:13.756071 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.756137 kubelet[2191]: E0714 21:58:13.756083 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.756137 kubelet[2191]: I0714 21:58:13.756097 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7m5c\" (UniqueName: \"kubernetes.io/projected/b453fdfd-5b94-4411-a498-a6ed452275d0-kube-api-access-q7m5c\") pod \"csi-node-driver-6vscj\" (UID: \"b453fdfd-5b94-4411-a498-a6ed452275d0\") " pod="calico-system/csi-node-driver-6vscj" Jul 14 21:58:13.756324 kubelet[2191]: E0714 21:58:13.756238 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.756324 kubelet[2191]: W0714 21:58:13.756264 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.756324 kubelet[2191]: E0714 21:58:13.756273 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:13.756457 kubelet[2191]: E0714 21:58:13.756445 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:13.756457 kubelet[2191]: W0714 21:58:13.756455 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:13.756610 kubelet[2191]: E0714 21:58:13.756561 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:13.998000 audit[2772]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:14.001686 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 14 21:58:14.001779 kernel: audit: type=1325 audit(1752530293.998:308): table=filter:97 family=2 entries=20 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:14.001825 kernel: audit: type=1300 audit(1752530293.998:308): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd3149b30 a2=0 a3=1 items=0 ppid=2295 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:13.998000 audit[2772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd3149b30 a2=0 a3=1 items=0 ppid=2295 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:13.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:14.008604 kernel: audit: type=1327 audit(1752530293.998:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:14.008000 audit[2772]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:14.008000 audit[2772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3149b30 a2=0 a3=1 items=0 ppid=2295 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:14.014123 kernel: audit: type=1325 audit(1752530294.008:309): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:14.014194 kernel: audit: type=1300 audit(1752530294.008:309): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3149b30 a2=0 a3=1 items=0 ppid=2295 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:14.014221 kernel: audit: type=1327 audit(1752530294.008:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:14.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:14.623250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36471411.mount: Deactivated successfully. 
Jul 14 21:58:14.904567 kubelet[2191]: E0714 21:58:14.904361 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vscj" podUID="b453fdfd-5b94-4411-a498-a6ed452275d0" Jul 14 21:58:15.367672 env[1319]: time="2025-07-14T21:58:15.367624090Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:15.368794 env[1319]: time="2025-07-14T21:58:15.368763767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:15.373191 env[1319]: time="2025-07-14T21:58:15.373156948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:15.374645 env[1319]: time="2025-07-14T21:58:15.374617995Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:15.375343 env[1319]: time="2025-07-14T21:58:15.375314537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 14 21:58:15.382099 env[1319]: time="2025-07-14T21:58:15.382056715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 21:58:15.402726 env[1319]: time="2025-07-14T21:58:15.402494692Z" level=info msg="CreateContainer within sandbox 
\"049bbf373150cf0e481218accef77b02e2484b4c1a0e9d73ced039b156c92dcd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 21:58:15.465650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518517399.mount: Deactivated successfully. Jul 14 21:58:15.468862 env[1319]: time="2025-07-14T21:58:15.468811747Z" level=info msg="CreateContainer within sandbox \"049bbf373150cf0e481218accef77b02e2484b4c1a0e9d73ced039b156c92dcd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dfde4b4ef8c4508ddca4de30e2b1105ee8488af15dd29902dd8c4cb1b63ff8c1\"" Jul 14 21:58:15.469927 env[1319]: time="2025-07-14T21:58:15.469610813Z" level=info msg="StartContainer for \"dfde4b4ef8c4508ddca4de30e2b1105ee8488af15dd29902dd8c4cb1b63ff8c1\"" Jul 14 21:58:15.536775 env[1319]: time="2025-07-14T21:58:15.536718413Z" level=info msg="StartContainer for \"dfde4b4ef8c4508ddca4de30e2b1105ee8488af15dd29902dd8c4cb1b63ff8c1\" returns successfully" Jul 14 21:58:15.969658 kubelet[2191]: E0714 21:58:15.969625 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:15.993319 kubelet[2191]: I0714 21:58:15.993242 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7789f8f558-jwnkk" podStartSLOduration=1.063951264 podStartE2EDuration="2.993222989s" podCreationTimestamp="2025-07-14 21:58:13 +0000 UTC" firstStartedPulling="2025-07-14 21:58:13.451209899 +0000 UTC m=+24.640813921" lastFinishedPulling="2025-07-14 21:58:15.380481624 +0000 UTC m=+26.570085646" observedRunningTime="2025-07-14 21:58:15.984138577 +0000 UTC m=+27.173742599" watchObservedRunningTime="2025-07-14 21:58:15.993222989 +0000 UTC m=+27.182827051" Jul 14 21:58:16.003000 audit[2823]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Jul 14 21:58:16.003000 audit[2823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeeb68380 a2=0 a3=1 items=0 ppid=2295 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:16.010622 kernel: audit: type=1325 audit(1752530296.003:310): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:16.010705 kernel: audit: type=1300 audit(1752530296.003:310): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeeb68380 a2=0 a3=1 items=0 ppid=2295 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:16.010727 kernel: audit: type=1327 audit(1752530296.003:310): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:16.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:16.017000 audit[2823]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:16.017000 audit[2823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffeeb68380 a2=0 a3=1 items=0 ppid=2295 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:16.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:16.020599 kernel: audit: 
type=1325 audit(1752530296.017:311): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:16.061284 kubelet[2191]: E0714 21:58:16.061242 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.061284 kubelet[2191]: W0714 21:58:16.061270 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.061284 kubelet[2191]: E0714 21:58:16.061291 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.061518 kubelet[2191]: E0714 21:58:16.061505 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.061518 kubelet[2191]: W0714 21:58:16.061516 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.061576 kubelet[2191]: E0714 21:58:16.061525 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.061726 kubelet[2191]: E0714 21:58:16.061699 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.061726 kubelet[2191]: W0714 21:58:16.061711 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.061726 kubelet[2191]: E0714 21:58:16.061721 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.061900 kubelet[2191]: E0714 21:58:16.061879 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.061900 kubelet[2191]: W0714 21:58:16.061890 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.061900 kubelet[2191]: E0714 21:58:16.061899 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.062067 kubelet[2191]: E0714 21:58:16.062051 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.062067 kubelet[2191]: W0714 21:58:16.062064 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.062123 kubelet[2191]: E0714 21:58:16.062073 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.063096 kubelet[2191]: E0714 21:58:16.063078 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.063096 kubelet[2191]: W0714 21:58:16.063089 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.063172 kubelet[2191]: E0714 21:58:16.063100 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.063275 kubelet[2191]: E0714 21:58:16.063259 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.063275 kubelet[2191]: W0714 21:58:16.063271 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.063337 kubelet[2191]: E0714 21:58:16.063280 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.063446 kubelet[2191]: E0714 21:58:16.063430 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.063485 kubelet[2191]: W0714 21:58:16.063448 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.063485 kubelet[2191]: E0714 21:58:16.063457 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.063832 kubelet[2191]: E0714 21:58:16.063814 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.063832 kubelet[2191]: W0714 21:58:16.063830 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.063902 kubelet[2191]: E0714 21:58:16.063848 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.064031 kubelet[2191]: E0714 21:58:16.064019 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.064031 kubelet[2191]: W0714 21:58:16.064029 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.064080 kubelet[2191]: E0714 21:58:16.064037 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.064180 kubelet[2191]: E0714 21:58:16.064167 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.064213 kubelet[2191]: W0714 21:58:16.064181 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.064213 kubelet[2191]: E0714 21:58:16.064189 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.064324 kubelet[2191]: E0714 21:58:16.064313 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.064354 kubelet[2191]: W0714 21:58:16.064325 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.064354 kubelet[2191]: E0714 21:58:16.064333 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.064500 kubelet[2191]: E0714 21:58:16.064488 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.064500 kubelet[2191]: W0714 21:58:16.064499 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.064553 kubelet[2191]: E0714 21:58:16.064508 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.064674 kubelet[2191]: E0714 21:58:16.064663 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.064701 kubelet[2191]: W0714 21:58:16.064674 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.064701 kubelet[2191]: E0714 21:58:16.064682 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.064815 kubelet[2191]: E0714 21:58:16.064805 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.064841 kubelet[2191]: W0714 21:58:16.064819 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.064841 kubelet[2191]: E0714 21:58:16.064828 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.076191 kubelet[2191]: E0714 21:58:16.076171 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.076191 kubelet[2191]: W0714 21:58:16.076191 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.076315 kubelet[2191]: E0714 21:58:16.076203 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.076386 kubelet[2191]: E0714 21:58:16.076372 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.076386 kubelet[2191]: W0714 21:58:16.076382 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.076455 kubelet[2191]: E0714 21:58:16.076394 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.076637 kubelet[2191]: E0714 21:58:16.076626 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.076637 kubelet[2191]: W0714 21:58:16.076636 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.076709 kubelet[2191]: E0714 21:58:16.076649 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.076955 kubelet[2191]: E0714 21:58:16.076936 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.076955 kubelet[2191]: W0714 21:58:16.076952 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.077019 kubelet[2191]: E0714 21:58:16.076968 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.077149 kubelet[2191]: E0714 21:58:16.077131 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.077149 kubelet[2191]: W0714 21:58:16.077147 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.077205 kubelet[2191]: E0714 21:58:16.077161 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.077372 kubelet[2191]: E0714 21:58:16.077356 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.077413 kubelet[2191]: W0714 21:58:16.077372 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.077413 kubelet[2191]: E0714 21:58:16.077388 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.077609 kubelet[2191]: E0714 21:58:16.077579 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.077654 kubelet[2191]: W0714 21:58:16.077609 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.077654 kubelet[2191]: E0714 21:58:16.077618 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.078278 kubelet[2191]: E0714 21:58:16.078241 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.078278 kubelet[2191]: W0714 21:58:16.078264 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.078539 kubelet[2191]: E0714 21:58:16.078284 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.078602 kubelet[2191]: E0714 21:58:16.078555 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.078602 kubelet[2191]: W0714 21:58:16.078567 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.078602 kubelet[2191]: E0714 21:58:16.078597 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.078759 kubelet[2191]: E0714 21:58:16.078746 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.078759 kubelet[2191]: W0714 21:58:16.078757 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.078808 kubelet[2191]: E0714 21:58:16.078770 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.078956 kubelet[2191]: E0714 21:58:16.078945 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.078956 kubelet[2191]: W0714 21:58:16.078955 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.079006 kubelet[2191]: E0714 21:58:16.078978 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.079095 kubelet[2191]: E0714 21:58:16.079083 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.079095 kubelet[2191]: W0714 21:58:16.079093 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.079146 kubelet[2191]: E0714 21:58:16.079111 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.079233 kubelet[2191]: E0714 21:58:16.079222 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.079233 kubelet[2191]: W0714 21:58:16.079231 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.079285 kubelet[2191]: E0714 21:58:16.079243 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.079425 kubelet[2191]: E0714 21:58:16.079413 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.079425 kubelet[2191]: W0714 21:58:16.079424 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.079503 kubelet[2191]: E0714 21:58:16.079436 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.080832 kubelet[2191]: E0714 21:58:16.080805 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.080832 kubelet[2191]: W0714 21:58:16.080821 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.080926 kubelet[2191]: E0714 21:58:16.080838 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.081162 kubelet[2191]: E0714 21:58:16.081131 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.081162 kubelet[2191]: W0714 21:58:16.081148 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.081222 kubelet[2191]: E0714 21:58:16.081165 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.081411 kubelet[2191]: E0714 21:58:16.081385 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.081411 kubelet[2191]: W0714 21:58:16.081401 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.081531 kubelet[2191]: E0714 21:58:16.081515 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:58:16.081790 kubelet[2191]: E0714 21:58:16.081775 2191 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:58:16.081790 kubelet[2191]: W0714 21:58:16.081789 2191 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:58:16.081854 kubelet[2191]: E0714 21:58:16.081800 2191 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:58:16.352287 env[1319]: time="2025-07-14T21:58:16.352246735Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:16.353938 env[1319]: time="2025-07-14T21:58:16.353900428Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:16.355210 env[1319]: time="2025-07-14T21:58:16.355170468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:16.356575 env[1319]: time="2025-07-14T21:58:16.356547392Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:16.357099 env[1319]: time="2025-07-14T21:58:16.357076529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image 
reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 14 21:58:16.359279 env[1319]: time="2025-07-14T21:58:16.359248878Z" level=info msg="CreateContainer within sandbox \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 21:58:16.369567 env[1319]: time="2025-07-14T21:58:16.369487044Z" level=info msg="CreateContainer within sandbox \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f5ebf5dedcc74d96af3e53779a625a2881a174a4e3ca3d9e1e60ea101f36c0c\"" Jul 14 21:58:16.369874 env[1319]: time="2025-07-14T21:58:16.369846175Z" level=info msg="StartContainer for \"8f5ebf5dedcc74d96af3e53779a625a2881a174a4e3ca3d9e1e60ea101f36c0c\"" Jul 14 21:58:16.476203 env[1319]: time="2025-07-14T21:58:16.476154197Z" level=info msg="StartContainer for \"8f5ebf5dedcc74d96af3e53779a625a2881a174a4e3ca3d9e1e60ea101f36c0c\" returns successfully" Jul 14 21:58:16.518472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f5ebf5dedcc74d96af3e53779a625a2881a174a4e3ca3d9e1e60ea101f36c0c-rootfs.mount: Deactivated successfully. 
Jul 14 21:58:16.542138 env[1319]: time="2025-07-14T21:58:16.541574679Z" level=info msg="shim disconnected" id=8f5ebf5dedcc74d96af3e53779a625a2881a174a4e3ca3d9e1e60ea101f36c0c Jul 14 21:58:16.542138 env[1319]: time="2025-07-14T21:58:16.541722523Z" level=warning msg="cleaning up after shim disconnected" id=8f5ebf5dedcc74d96af3e53779a625a2881a174a4e3ca3d9e1e60ea101f36c0c namespace=k8s.io Jul 14 21:58:16.542138 env[1319]: time="2025-07-14T21:58:16.541731924Z" level=info msg="cleaning up dead shim" Jul 14 21:58:16.548343 env[1319]: time="2025-07-14T21:58:16.548298293Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:58:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2902 runtime=io.containerd.runc.v2\n" Jul 14 21:58:16.904436 kubelet[2191]: E0714 21:58:16.904385 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vscj" podUID="b453fdfd-5b94-4411-a498-a6ed452275d0" Jul 14 21:58:16.971157 kubelet[2191]: E0714 21:58:16.970873 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:16.971743 env[1319]: time="2025-07-14T21:58:16.971670963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 21:58:18.902468 kubelet[2191]: E0714 21:58:18.902420 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vscj" podUID="b453fdfd-5b94-4411-a498-a6ed452275d0" Jul 14 21:58:19.806531 env[1319]: time="2025-07-14T21:58:19.806485200Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:19.808589 env[1319]: time="2025-07-14T21:58:19.808540224Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:19.810084 env[1319]: time="2025-07-14T21:58:19.810040950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:19.811795 env[1319]: time="2025-07-14T21:58:19.811759323Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:19.812398 env[1319]: time="2025-07-14T21:58:19.812373662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 14 21:58:19.816106 env[1319]: time="2025-07-14T21:58:19.816067416Z" level=info msg="CreateContainer within sandbox \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 21:58:19.828416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257775129.mount: Deactivated successfully. 
Jul 14 21:58:19.832202 env[1319]: time="2025-07-14T21:58:19.832159832Z" level=info msg="CreateContainer within sandbox \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"048d256becd1548bb5e364cce764c69bc4a999291de38c2eea99ecc60aa1c244\"" Jul 14 21:58:19.833136 env[1319]: time="2025-07-14T21:58:19.833098341Z" level=info msg="StartContainer for \"048d256becd1548bb5e364cce764c69bc4a999291de38c2eea99ecc60aa1c244\"" Jul 14 21:58:19.920220 env[1319]: time="2025-07-14T21:58:19.919689970Z" level=info msg="StartContainer for \"048d256becd1548bb5e364cce764c69bc4a999291de38c2eea99ecc60aa1c244\" returns successfully" Jul 14 21:58:20.594742 env[1319]: time="2025-07-14T21:58:20.594678723Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:58:20.613435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-048d256becd1548bb5e364cce764c69bc4a999291de38c2eea99ecc60aa1c244-rootfs.mount: Deactivated successfully. 
Jul 14 21:58:20.619019 env[1319]: time="2025-07-14T21:58:20.618966104Z" level=info msg="shim disconnected" id=048d256becd1548bb5e364cce764c69bc4a999291de38c2eea99ecc60aa1c244
Jul 14 21:58:20.619019 env[1319]: time="2025-07-14T21:58:20.619014386Z" level=warning msg="cleaning up after shim disconnected" id=048d256becd1548bb5e364cce764c69bc4a999291de38c2eea99ecc60aa1c244 namespace=k8s.io
Jul 14 21:58:20.619019 env[1319]: time="2025-07-14T21:58:20.619024706Z" level=info msg="cleaning up dead shim"
Jul 14 21:58:20.625988 env[1319]: time="2025-07-14T21:58:20.625945637Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:58:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2972 runtime=io.containerd.runc.v2\n"
Jul 14 21:58:20.674787 kubelet[2191]: I0714 21:58:20.674482 2191 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 14 21:58:20.708231 kubelet[2191]: I0714 21:58:20.707939 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7023e1db-2106-48dc-85a1-3f1e832bd4ba-config-volume\") pod \"coredns-7c65d6cfc9-tbjx5\" (UID: \"7023e1db-2106-48dc-85a1-3f1e832bd4ba\") " pod="kube-system/coredns-7c65d6cfc9-tbjx5"
Jul 14 21:58:20.708231 kubelet[2191]: I0714 21:58:20.707987 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xf7w\" (UniqueName: \"kubernetes.io/projected/7023e1db-2106-48dc-85a1-3f1e832bd4ba-kube-api-access-6xf7w\") pod \"coredns-7c65d6cfc9-tbjx5\" (UID: \"7023e1db-2106-48dc-85a1-3f1e832bd4ba\") " pod="kube-system/coredns-7c65d6cfc9-tbjx5"
Jul 14 21:58:20.809187 kubelet[2191]: I0714 21:58:20.809135 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtgw\" (UniqueName: \"kubernetes.io/projected/6e01068b-a03a-4c0c-99a9-7e9275cb210b-kube-api-access-zrtgw\") pod \"calico-apiserver-79b975cf4d-tvnn9\" (UID: \"6e01068b-a03a-4c0c-99a9-7e9275cb210b\") " pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9"
Jul 14 21:58:20.809187 kubelet[2191]: I0714 21:58:20.809195 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqzj\" (UniqueName: \"kubernetes.io/projected/e1c592a9-3faf-4978-af8d-8d83292a3475-kube-api-access-ksqzj\") pod \"calico-apiserver-79b975cf4d-9xgmn\" (UID: \"e1c592a9-3faf-4978-af8d-8d83292a3475\") " pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn"
Jul 14 21:58:20.809382 kubelet[2191]: I0714 21:58:20.809215 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb397a8-167c-4a3c-b754-5643d7b757de-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-8mddv\" (UID: \"0fb397a8-167c-4a3c-b754-5643d7b757de\") " pod="calico-system/goldmane-58fd7646b9-8mddv"
Jul 14 21:58:20.809382 kubelet[2191]: I0714 21:58:20.809230 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsf4\" (UniqueName: \"kubernetes.io/projected/0fb397a8-167c-4a3c-b754-5643d7b757de-kube-api-access-rdsf4\") pod \"goldmane-58fd7646b9-8mddv\" (UID: \"0fb397a8-167c-4a3c-b754-5643d7b757de\") " pod="calico-system/goldmane-58fd7646b9-8mddv"
Jul 14 21:58:20.809382 kubelet[2191]: I0714 21:58:20.809245 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1c592a9-3faf-4978-af8d-8d83292a3475-calico-apiserver-certs\") pod \"calico-apiserver-79b975cf4d-9xgmn\" (UID: \"e1c592a9-3faf-4978-af8d-8d83292a3475\") " pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn"
Jul 14 21:58:20.809382 kubelet[2191]: I0714 21:58:20.809273 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvfh\" (UniqueName: \"kubernetes.io/projected/a8fc1316-6b04-4d95-89ba-2535a5175aa9-kube-api-access-zlvfh\") pod \"coredns-7c65d6cfc9-gzbm6\" (UID: \"a8fc1316-6b04-4d95-89ba-2535a5175aa9\") " pod="kube-system/coredns-7c65d6cfc9-gzbm6"
Jul 14 21:58:20.809382 kubelet[2191]: I0714 21:58:20.809290 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e01068b-a03a-4c0c-99a9-7e9275cb210b-calico-apiserver-certs\") pod \"calico-apiserver-79b975cf4d-tvnn9\" (UID: \"6e01068b-a03a-4c0c-99a9-7e9275cb210b\") " pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9"
Jul 14 21:58:20.809547 kubelet[2191]: I0714 21:58:20.809311 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0fb397a8-167c-4a3c-b754-5643d7b757de-goldmane-key-pair\") pod \"goldmane-58fd7646b9-8mddv\" (UID: \"0fb397a8-167c-4a3c-b754-5643d7b757de\") " pod="calico-system/goldmane-58fd7646b9-8mddv"
Jul 14 21:58:20.809547 kubelet[2191]: I0714 21:58:20.809331 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v249\" (UniqueName: \"kubernetes.io/projected/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-kube-api-access-7v249\") pod \"whisker-6fbfc6dcd-bcd56\" (UID: \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\") " pod="calico-system/whisker-6fbfc6dcd-bcd56"
Jul 14 21:58:20.809547 kubelet[2191]: I0714 21:58:20.809358 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr5zm\" (UniqueName: \"kubernetes.io/projected/c1a1b271-e606-49ed-b47b-b98b88fdbed2-kube-api-access-zr5zm\") pod \"calico-kube-controllers-67865bb6d5-jb527\" (UID: \"c1a1b271-e606-49ed-b47b-b98b88fdbed2\") " pod="calico-system/calico-kube-controllers-67865bb6d5-jb527"
Jul 14 21:58:20.809547 kubelet[2191]: I0714 21:58:20.809377 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fb397a8-167c-4a3c-b754-5643d7b757de-config\") pod \"goldmane-58fd7646b9-8mddv\" (UID: \"0fb397a8-167c-4a3c-b754-5643d7b757de\") " pod="calico-system/goldmane-58fd7646b9-8mddv"
Jul 14 21:58:20.809547 kubelet[2191]: I0714 21:58:20.809405 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-backend-key-pair\") pod \"whisker-6fbfc6dcd-bcd56\" (UID: \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\") " pod="calico-system/whisker-6fbfc6dcd-bcd56"
Jul 14 21:58:20.809724 kubelet[2191]: I0714 21:58:20.809430 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-ca-bundle\") pod \"whisker-6fbfc6dcd-bcd56\" (UID: \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\") " pod="calico-system/whisker-6fbfc6dcd-bcd56"
Jul 14 21:58:20.809724 kubelet[2191]: I0714 21:58:20.809464 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8fc1316-6b04-4d95-89ba-2535a5175aa9-config-volume\") pod \"coredns-7c65d6cfc9-gzbm6\" (UID: \"a8fc1316-6b04-4d95-89ba-2535a5175aa9\") " pod="kube-system/coredns-7c65d6cfc9-gzbm6"
Jul 14 21:58:20.809724 kubelet[2191]: I0714 21:58:20.809483 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a1b271-e606-49ed-b47b-b98b88fdbed2-tigera-ca-bundle\") pod \"calico-kube-controllers-67865bb6d5-jb527\" (UID: \"c1a1b271-e606-49ed-b47b-b98b88fdbed2\") " pod="calico-system/calico-kube-controllers-67865bb6d5-jb527"
Jul 14 21:58:20.906916 env[1319]: time="2025-07-14T21:58:20.905682059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vscj,Uid:b453fdfd-5b94-4411-a498-a6ed452275d0,Namespace:calico-system,Attempt:0,}"
Jul 14 21:58:20.983222 env[1319]: time="2025-07-14T21:58:20.982345319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 14 21:58:21.001204 kubelet[2191]: E0714 21:58:21.001172 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:58:21.001861 env[1319]: time="2025-07-14T21:58:21.001820314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbjx5,Uid:7023e1db-2106-48dc-85a1-3f1e832bd4ba,Namespace:kube-system,Attempt:0,}"
Jul 14 21:58:21.010435 env[1319]: time="2025-07-14T21:58:21.010392053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-9xgmn,Uid:e1c592a9-3faf-4978-af8d-8d83292a3475,Namespace:calico-apiserver,Attempt:0,}"
Jul 14 21:58:21.012499 kubelet[2191]: E0714 21:58:21.012476 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:58:21.013169 env[1319]: time="2025-07-14T21:58:21.013133856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzbm6,Uid:a8fc1316-6b04-4d95-89ba-2535a5175aa9,Namespace:kube-system,Attempt:0,}"
Jul 14 21:58:21.021142 env[1319]: time="2025-07-14T21:58:21.021095537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-tvnn9,Uid:6e01068b-a03a-4c0c-99a9-7e9275cb210b,Namespace:calico-apiserver,Attempt:0,}"
Jul 14 21:58:21.028979 env[1319]: time="2025-07-14T21:58:21.028944375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8mddv,Uid:0fb397a8-167c-4a3c-b754-5643d7b757de,Namespace:calico-system,Attempt:0,}"
Jul 14 21:58:21.052690 env[1319]: time="2025-07-14T21:58:21.052530328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fbfc6dcd-bcd56,Uid:4e6ebd15-2e3d-40ae-9a5f-701f3c026863,Namespace:calico-system,Attempt:0,}"
Jul 14 21:58:21.052845 env[1319]: time="2025-07-14T21:58:21.052530968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67865bb6d5-jb527,Uid:c1a1b271-e606-49ed-b47b-b98b88fdbed2,Namespace:calico-system,Attempt:0,}"
Jul 14 21:58:21.129665 env[1319]: time="2025-07-14T21:58:21.129565419Z" level=error msg="Failed to destroy network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.129988 env[1319]: time="2025-07-14T21:58:21.129951951Z" level=error msg="encountered an error cleaning up failed sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.130042 env[1319]: time="2025-07-14T21:58:21.129996872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbjx5,Uid:7023e1db-2106-48dc-85a1-3f1e832bd4ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.130243 kubelet[2191]: E0714 21:58:21.130200 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.130307 kubelet[2191]: E0714 21:58:21.130270 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tbjx5"
Jul 14 21:58:21.130307 kubelet[2191]: E0714 21:58:21.130290 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tbjx5"
Jul 14 21:58:21.130381 kubelet[2191]: E0714 21:58:21.130327 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tbjx5_kube-system(7023e1db-2106-48dc-85a1-3f1e832bd4ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tbjx5_kube-system(7023e1db-2106-48dc-85a1-3f1e832bd4ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tbjx5" podUID="7023e1db-2106-48dc-85a1-3f1e832bd4ba"
Jul 14 21:58:21.137291 env[1319]: time="2025-07-14T21:58:21.137239492Z" level=error msg="Failed to destroy network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.137775 env[1319]: time="2025-07-14T21:58:21.137741627Z" level=error msg="encountered an error cleaning up failed sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.137913 env[1319]: time="2025-07-14T21:58:21.137883751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vscj,Uid:b453fdfd-5b94-4411-a498-a6ed452275d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.138214 kubelet[2191]: E0714 21:58:21.138178 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.138300 kubelet[2191]: E0714 21:58:21.138235 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vscj"
Jul 14 21:58:21.138300 kubelet[2191]: E0714 21:58:21.138254 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vscj"
Jul 14 21:58:21.138352 kubelet[2191]: E0714 21:58:21.138294 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6vscj_calico-system(b453fdfd-5b94-4411-a498-a6ed452275d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6vscj_calico-system(b453fdfd-5b94-4411-a498-a6ed452275d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6vscj" podUID="b453fdfd-5b94-4411-a498-a6ed452275d0"
Jul 14 21:58:21.171685 env[1319]: time="2025-07-14T21:58:21.170674383Z" level=error msg="Failed to destroy network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.173000 env[1319]: time="2025-07-14T21:58:21.172946972Z" level=error msg="encountered an error cleaning up failed sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.173747 env[1319]: time="2025-07-14T21:58:21.173688115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-9xgmn,Uid:e1c592a9-3faf-4978-af8d-8d83292a3475,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.173974 kubelet[2191]: E0714 21:58:21.173933 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.174047 kubelet[2191]: E0714 21:58:21.173998 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn"
Jul 14 21:58:21.174047 kubelet[2191]: E0714 21:58:21.174022 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn"
Jul 14 21:58:21.174116 kubelet[2191]: E0714 21:58:21.174089 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79b975cf4d-9xgmn_calico-apiserver(e1c592a9-3faf-4978-af8d-8d83292a3475)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79b975cf4d-9xgmn_calico-apiserver(e1c592a9-3faf-4978-af8d-8d83292a3475)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn" podUID="e1c592a9-3faf-4978-af8d-8d83292a3475"
Jul 14 21:58:21.176062 env[1319]: time="2025-07-14T21:58:21.176017985Z" level=error msg="Failed to destroy network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.176452 env[1319]: time="2025-07-14T21:58:21.176418237Z" level=error msg="encountered an error cleaning up failed sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.176510 env[1319]: time="2025-07-14T21:58:21.176483999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-tvnn9,Uid:6e01068b-a03a-4c0c-99a9-7e9275cb210b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.176776 kubelet[2191]: E0714 21:58:21.176727 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.176848 kubelet[2191]: E0714 21:58:21.176800 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9"
Jul 14 21:58:21.176848 kubelet[2191]: E0714 21:58:21.176820 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9"
Jul 14 21:58:21.176917 kubelet[2191]: E0714 21:58:21.176881 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79b975cf4d-tvnn9_calico-apiserver(6e01068b-a03a-4c0c-99a9-7e9275cb210b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79b975cf4d-tvnn9_calico-apiserver(6e01068b-a03a-4c0c-99a9-7e9275cb210b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9" podUID="6e01068b-a03a-4c0c-99a9-7e9275cb210b"
Jul 14 21:58:21.189672 env[1319]: time="2025-07-14T21:58:21.189613596Z" level=error msg="Failed to destroy network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.190030 env[1319]: time="2025-07-14T21:58:21.189998968Z" level=error msg="encountered an error cleaning up failed sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.190089 env[1319]: time="2025-07-14T21:58:21.190049250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8mddv,Uid:0fb397a8-167c-4a3c-b754-5643d7b757de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.190298 kubelet[2191]: E0714 21:58:21.190264 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.190410 kubelet[2191]: E0714 21:58:21.190323 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-8mddv"
Jul 14 21:58:21.190410 kubelet[2191]: E0714 21:58:21.190341 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-8mddv"
Jul 14 21:58:21.190410 kubelet[2191]: E0714 21:58:21.190378 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-8mddv_calico-system(0fb397a8-167c-4a3c-b754-5643d7b757de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-8mddv_calico-system(0fb397a8-167c-4a3c-b754-5643d7b757de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-8mddv" podUID="0fb397a8-167c-4a3c-b754-5643d7b757de"
Jul 14 21:58:21.192686 env[1319]: time="2025-07-14T21:58:21.192623208Z" level=error msg="Failed to destroy network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.193209 env[1319]: time="2025-07-14T21:58:21.193178024Z" level=error msg="encountered an error cleaning up failed sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.193387 env[1319]: time="2025-07-14T21:58:21.193333989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fbfc6dcd-bcd56,Uid:4e6ebd15-2e3d-40ae-9a5f-701f3c026863,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.193920 kubelet[2191]: E0714 21:58:21.193877 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.194009 kubelet[2191]: E0714 21:58:21.193942 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fbfc6dcd-bcd56"
Jul 14 21:58:21.194009 kubelet[2191]: E0714 21:58:21.193973 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fbfc6dcd-bcd56"
Jul 14 21:58:21.194075 kubelet[2191]: E0714 21:58:21.194007 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fbfc6dcd-bcd56_calico-system(4e6ebd15-2e3d-40ae-9a5f-701f3c026863)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fbfc6dcd-bcd56_calico-system(4e6ebd15-2e3d-40ae-9a5f-701f3c026863)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fbfc6dcd-bcd56" podUID="4e6ebd15-2e3d-40ae-9a5f-701f3c026863"
Jul 14 21:58:21.195080 env[1319]: time="2025-07-14T21:58:21.195042281Z" level=error msg="Failed to destroy network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.195545 env[1319]: time="2025-07-14T21:58:21.195360930Z" level=error msg="encountered an error cleaning up failed sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.195857 env[1319]: time="2025-07-14T21:58:21.195805224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzbm6,Uid:a8fc1316-6b04-4d95-89ba-2535a5175aa9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.196009 kubelet[2191]: E0714 21:58:21.195974 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.196069 kubelet[2191]: E0714 21:58:21.196016 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gzbm6"
Jul 14 21:58:21.196069 kubelet[2191]: E0714 21:58:21.196030 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gzbm6"
Jul 14 21:58:21.196125 kubelet[2191]: E0714 21:58:21.196065 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gzbm6_kube-system(a8fc1316-6b04-4d95-89ba-2535a5175aa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gzbm6_kube-system(a8fc1316-6b04-4d95-89ba-2535a5175aa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gzbm6" podUID="a8fc1316-6b04-4d95-89ba-2535a5175aa9"
Jul 14 21:58:21.207166 env[1319]: time="2025-07-14T21:58:21.207101206Z" level=error msg="Failed to destroy network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.207496 env[1319]: time="2025-07-14T21:58:21.207467017Z" level=error msg="encountered an error cleaning up failed sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.207547 env[1319]: time="2025-07-14T21:58:21.207514218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67865bb6d5-jb527,Uid:c1a1b271-e606-49ed-b47b-b98b88fdbed2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.207790 kubelet[2191]: E0714 21:58:21.207752 2191 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 21:58:21.207846 kubelet[2191]: E0714 21:58:21.207813 2191 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67865bb6d5-jb527"
Jul 14 21:58:21.207846 kubelet[2191]: E0714 21:58:21.207832 2191 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67865bb6d5-jb527"
Jul 14 21:58:21.207916 kubelet[2191]: E0714 21:58:21.207872 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67865bb6d5-jb527_calico-system(c1a1b271-e606-49ed-b47b-b98b88fdbed2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67865bb6d5-jb527_calico-system(c1a1b271-e606-49ed-b47b-b98b88fdbed2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67865bb6d5-jb527" podUID="c1a1b271-e606-49ed-b47b-b98b88fdbed2"
Jul 14 21:58:21.827968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5-shm.mount: Deactivated successfully.
Jul 14 21:58:21.828107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c-shm.mount: Deactivated successfully.
Jul 14 21:58:21.982165 kubelet[2191]: I0714 21:58:21.982128 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe"
Jul 14 21:58:21.983202 env[1319]: time="2025-07-14T21:58:21.983161649Z" level=info msg="StopPodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\""
Jul 14 21:58:21.984369 kubelet[2191]: I0714 21:58:21.984308 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5"
Jul 14 21:58:21.985036 env[1319]: time="2025-07-14T21:58:21.984977864Z" level=info msg="StopPodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\""
Jul 14 21:58:21.991626 kubelet[2191]: I0714 21:58:21.991454 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677"
Jul 14 21:58:21.992700 env[1319]: time="2025-07-14T21:58:21.992655577Z" level=info msg="StopPodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\""
Jul 14 21:58:21.998108 kubelet[2191]: I0714 21:58:21.997536 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380"
Jul 14 21:58:21.998197 env[1319]: time="2025-07-14T21:58:21.997997538Z" level=info msg="StopPodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\""
Jul 14 21:58:21.998817 kubelet[2191]: I0714 21:58:21.998566 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879"
Jul 14 21:58:21.999365
env[1319]: time="2025-07-14T21:58:21.999262857Z" level=info msg="StopPodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\"" Jul 14 21:58:22.003495 kubelet[2191]: I0714 21:58:22.003137 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:22.005265 kubelet[2191]: I0714 21:58:22.004939 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:22.005559 env[1319]: time="2025-07-14T21:58:22.005519725Z" level=info msg="StopPodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\"" Jul 14 21:58:22.006100 env[1319]: time="2025-07-14T21:58:22.006070781Z" level=info msg="StopPodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\"" Jul 14 21:58:22.007969 kubelet[2191]: I0714 21:58:22.007627 2191 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:22.008207 env[1319]: time="2025-07-14T21:58:22.008170684Z" level=info msg="StopPodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\"" Jul 14 21:58:22.047413 env[1319]: time="2025-07-14T21:58:22.045394521Z" level=error msg="StopPodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" failed" error="failed to destroy network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.047709 kubelet[2191]: E0714 21:58:22.046018 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:22.047709 kubelet[2191]: E0714 21:58:22.046490 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5"} Jul 14 21:58:22.050628 kubelet[2191]: E0714 21:58:22.050330 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7023e1db-2106-48dc-85a1-3f1e832bd4ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.050628 kubelet[2191]: E0714 21:58:22.050382 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7023e1db-2106-48dc-85a1-3f1e832bd4ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tbjx5" podUID="7023e1db-2106-48dc-85a1-3f1e832bd4ba" Jul 14 21:58:22.054785 env[1319]: time="2025-07-14T21:58:22.054564196Z" level=error msg="StopPodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" failed" error="failed to destroy network for sandbox 
\"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.054949 kubelet[2191]: E0714 21:58:22.054845 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:22.054949 kubelet[2191]: E0714 21:58:22.054896 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe"} Jul 14 21:58:22.054949 kubelet[2191]: E0714 21:58:22.054944 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fb397a8-167c-4a3c-b754-5643d7b757de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.055092 kubelet[2191]: E0714 21:58:22.054967 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fb397a8-167c-4a3c-b754-5643d7b757de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-8mddv" podUID="0fb397a8-167c-4a3c-b754-5643d7b757de" Jul 14 21:58:22.067855 env[1319]: time="2025-07-14T21:58:22.067790873Z" level=error msg="StopPodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" failed" error="failed to destroy network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.068424 kubelet[2191]: E0714 21:58:22.068224 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:22.070512 kubelet[2191]: E0714 21:58:22.070451 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677"} Jul 14 21:58:22.070656 kubelet[2191]: E0714 21:58:22.070517 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1c592a9-3faf-4978-af8d-8d83292a3475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jul 14 21:58:22.070656 kubelet[2191]: E0714 21:58:22.070542 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1c592a9-3faf-4978-af8d-8d83292a3475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn" podUID="e1c592a9-3faf-4978-af8d-8d83292a3475" Jul 14 21:58:22.085437 env[1319]: time="2025-07-14T21:58:22.085373241Z" level=error msg="StopPodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" failed" error="failed to destroy network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.085685 kubelet[2191]: E0714 21:58:22.085641 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:22.085760 kubelet[2191]: E0714 21:58:22.085699 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879"} Jul 14 21:58:22.085760 kubelet[2191]: E0714 21:58:22.085740 2191 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1a1b271-e606-49ed-b47b-b98b88fdbed2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.085845 kubelet[2191]: E0714 21:58:22.085765 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1a1b271-e606-49ed-b47b-b98b88fdbed2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67865bb6d5-jb527" podUID="c1a1b271-e606-49ed-b47b-b98b88fdbed2" Jul 14 21:58:22.094857 env[1319]: time="2025-07-14T21:58:22.094800364Z" level=error msg="StopPodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" failed" error="failed to destroy network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.095508 kubelet[2191]: E0714 21:58:22.095451 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:22.095642 kubelet[2191]: E0714 21:58:22.095517 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705"} Jul 14 21:58:22.095642 kubelet[2191]: E0714 21:58:22.095553 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8fc1316-6b04-4d95-89ba-2535a5175aa9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.095642 kubelet[2191]: E0714 21:58:22.095573 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8fc1316-6b04-4d95-89ba-2535a5175aa9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gzbm6" podUID="a8fc1316-6b04-4d95-89ba-2535a5175aa9" Jul 14 21:58:22.096160 env[1319]: time="2025-07-14T21:58:22.096123203Z" level=error msg="StopPodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" failed" error="failed to destroy network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.096517 kubelet[2191]: E0714 21:58:22.096399 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:22.096631 kubelet[2191]: E0714 21:58:22.096525 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380"} Jul 14 21:58:22.096631 kubelet[2191]: E0714 21:58:22.096573 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e01068b-a03a-4c0c-99a9-7e9275cb210b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.097639 kubelet[2191]: E0714 21:58:22.097576 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e01068b-a03a-4c0c-99a9-7e9275cb210b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9" podUID="6e01068b-a03a-4c0c-99a9-7e9275cb210b" Jul 14 21:58:22.102031 env[1319]: time="2025-07-14T21:58:22.101991219Z" level=error msg="StopPodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" failed" error="failed to destroy network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.102290 kubelet[2191]: E0714 21:58:22.102257 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:22.102362 kubelet[2191]: E0714 21:58:22.102299 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c"} Jul 14 21:58:22.102362 kubelet[2191]: E0714 21:58:22.102324 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b453fdfd-5b94-4411-a498-a6ed452275d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.102362 kubelet[2191]: E0714 21:58:22.102343 2191 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"b453fdfd-5b94-4411-a498-a6ed452275d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6vscj" podUID="b453fdfd-5b94-4411-a498-a6ed452275d0" Jul 14 21:58:22.103942 env[1319]: time="2025-07-14T21:58:22.103898597Z" level=error msg="StopPodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" failed" error="failed to destroy network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:58:22.104284 kubelet[2191]: E0714 21:58:22.104250 2191 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:22.104336 kubelet[2191]: E0714 21:58:22.104291 2191 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005"} Jul 14 21:58:22.104336 kubelet[2191]: E0714 21:58:22.104315 2191 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:58:22.104407 kubelet[2191]: E0714 21:58:22.104333 2191 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fbfc6dcd-bcd56" podUID="4e6ebd15-2e3d-40ae-9a5f-701f3c026863" Jul 14 21:58:26.380153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421340493.mount: Deactivated successfully. 
Jul 14 21:58:26.656399 env[1319]: time="2025-07-14T21:58:26.656285665Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:26.661205 env[1319]: time="2025-07-14T21:58:26.661164722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:26.662334 env[1319]: time="2025-07-14T21:58:26.662308947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:26.663500 env[1319]: time="2025-07-14T21:58:26.663471692Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:26.663832 env[1319]: time="2025-07-14T21:58:26.663804568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 14 21:58:26.676804 env[1319]: time="2025-07-14T21:58:26.676709321Z" level=info msg="CreateContainer within sandbox \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 21:58:26.689724 env[1319]: time="2025-07-14T21:58:26.689679993Z" level=info msg="CreateContainer within sandbox \"169bc2b5aa7ddc7f2f11cc42844aa9ac43dee5b3c12e543cc8f046337470b5cc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"92ad90cc85a743a68a530f8402f9f31f0d3df3e158813ecbd6f2fbacf0a6a0c9\"" Jul 14 21:58:26.690425 env[1319]: time="2025-07-14T21:58:26.690393464Z" level=info msg="StartContainer for 
\"92ad90cc85a743a68a530f8402f9f31f0d3df3e158813ecbd6f2fbacf0a6a0c9\"" Jul 14 21:58:26.777149 env[1319]: time="2025-07-14T21:58:26.777093623Z" level=info msg="StartContainer for \"92ad90cc85a743a68a530f8402f9f31f0d3df3e158813ecbd6f2fbacf0a6a0c9\" returns successfully" Jul 14 21:58:27.013612 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 21:58:27.013736 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 21:58:27.053787 kubelet[2191]: I0714 21:58:27.053711 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6xt6k" podStartSLOduration=1.1291936360000001 podStartE2EDuration="14.053694785s" podCreationTimestamp="2025-07-14 21:58:13 +0000 UTC" firstStartedPulling="2025-07-14 21:58:13.740327086 +0000 UTC m=+24.929931068" lastFinishedPulling="2025-07-14 21:58:26.664828195 +0000 UTC m=+37.854432217" observedRunningTime="2025-07-14 21:58:27.053454748 +0000 UTC m=+38.243058770" watchObservedRunningTime="2025-07-14 21:58:27.053694785 +0000 UTC m=+38.243298767" Jul 14 21:58:27.129236 env[1319]: time="2025-07-14T21:58:27.129192930Z" level=info msg="StopPodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\"" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.210 [INFO][3510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.211 [INFO][3510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" iface="eth0" netns="/var/run/netns/cni-1518225e-6914-75aa-49f8-8184a9a4affa" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.212 [INFO][3510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" iface="eth0" netns="/var/run/netns/cni-1518225e-6914-75aa-49f8-8184a9a4affa" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.212 [INFO][3510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" iface="eth0" netns="/var/run/netns/cni-1518225e-6914-75aa-49f8-8184a9a4affa" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.212 [INFO][3510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.213 [INFO][3510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.302 [INFO][3520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.302 [INFO][3520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.302 [INFO][3520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.316 [WARNING][3520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.316 [INFO][3520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.318 [INFO][3520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:27.322039 env[1319]: 2025-07-14 21:58:27.320 [INFO][3510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:27.322455 env[1319]: time="2025-07-14T21:58:27.322353762Z" level=info msg="TearDown network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" successfully" Jul 14 21:58:27.322455 env[1319]: time="2025-07-14T21:58:27.322387681Z" level=info msg="StopPodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" returns successfully" Jul 14 21:58:27.349084 kubelet[2191]: I0714 21:58:27.348977 2191 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-backend-key-pair\") pod \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\" (UID: \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\") " Jul 14 21:58:27.349084 kubelet[2191]: I0714 21:58:27.349094 2191 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v249\" (UniqueName: \"kubernetes.io/projected/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-kube-api-access-7v249\") pod 
\"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\" (UID: \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\") " Jul 14 21:58:27.349288 kubelet[2191]: I0714 21:58:27.349123 2191 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-ca-bundle\") pod \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\" (UID: \"4e6ebd15-2e3d-40ae-9a5f-701f3c026863\") " Jul 14 21:58:27.352545 kubelet[2191]: I0714 21:58:27.352507 2191 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4e6ebd15-2e3d-40ae-9a5f-701f3c026863" (UID: "4e6ebd15-2e3d-40ae-9a5f-701f3c026863"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 21:58:27.353352 kubelet[2191]: I0714 21:58:27.353323 2191 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-kube-api-access-7v249" (OuterVolumeSpecName: "kube-api-access-7v249") pod "4e6ebd15-2e3d-40ae-9a5f-701f3c026863" (UID: "4e6ebd15-2e3d-40ae-9a5f-701f3c026863"). InnerVolumeSpecName "kube-api-access-7v249". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:58:27.354580 kubelet[2191]: I0714 21:58:27.354547 2191 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4e6ebd15-2e3d-40ae-9a5f-701f3c026863" (UID: "4e6ebd15-2e3d-40ae-9a5f-701f3c026863"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 21:58:27.380936 systemd[1]: run-netns-cni\x2d1518225e\x2d6914\x2d75aa\x2d49f8\x2d8184a9a4affa.mount: Deactivated successfully. 
Jul 14 21:58:27.381069 systemd[1]: var-lib-kubelet-pods-4e6ebd15\x2d2e3d\x2d40ae\x2d9a5f\x2d701f3c026863-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7v249.mount: Deactivated successfully. Jul 14 21:58:27.381163 systemd[1]: var-lib-kubelet-pods-4e6ebd15\x2d2e3d\x2d40ae\x2d9a5f\x2d701f3c026863-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 21:58:27.450328 kubelet[2191]: I0714 21:58:27.450282 2191 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 21:58:27.450328 kubelet[2191]: I0714 21:58:27.450319 2191 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v249\" (UniqueName: \"kubernetes.io/projected/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-kube-api-access-7v249\") on node \"localhost\" DevicePath \"\"" Jul 14 21:58:27.450328 kubelet[2191]: I0714 21:58:27.450330 2191 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e6ebd15-2e3d-40ae-9a5f-701f3c026863-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 21:58:28.058225 systemd[1]: run-containerd-runc-k8s.io-92ad90cc85a743a68a530f8402f9f31f0d3df3e158813ecbd6f2fbacf0a6a0c9-runc.VPhJB0.mount: Deactivated successfully. 
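An aside on reading the mount-unit names above: systemd escapes filesystem paths into unit names by mapping `/` to `-` and encoding literal dashes and other special characters as `\xNN` hex sequences (so `cni\x2d1518225e` is really `cni-1518225e`, and `kubernetes.io\x7eprojected` is `kubernetes.io~projected`). A minimal sketch of the reverse mapping — the helper name `unescape_unit` is ours, not a systemd API:

```python
def unescape_unit(name: str) -> str:
    """Recover the filesystem path from a systemd-escaped unit name.

    Handles the two rules visible in the log: '-' separates path
    components, and a literal '\\xNN' sequence encodes one byte.
    A real implementation would use `systemd-escape --unescape`.
    """
    out = []
    i = 0
    while i < len(name):
        if name.startswith("\\x", i):
            # \xNN: two hex digits encode one escaped character
            out.append(chr(int(name[i + 2:i + 4], 16)))
            i += 4
        elif name[i] == "-":
            out.append("/")  # '-' stands for a path separator
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

# The netns mount unit from the log (".mount" suffix dropped):
print(unescape_unit(
    "run-netns-cni\\x2d1518225e\\x2d6914\\x2d75aa\\x2d49f8\\x2d8184a9a4affa"))
# -> /run/netns/cni-1518225e-6914-75aa-49f8-8184a9a4affa
```

This is why the `run-netns-...mount` and `var-lib-kubelet-pods-...mount` deactivation messages line up with the netns path and the kubelet volume directories named elsewhere in the log.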
Jul 14 21:58:28.155310 kubelet[2191]: I0714 21:58:28.155248 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jw6h\" (UniqueName: \"kubernetes.io/projected/913c5eec-9e8f-4194-807a-eb71db291448-kube-api-access-2jw6h\") pod \"whisker-547bf5c54b-8dw4g\" (UID: \"913c5eec-9e8f-4194-807a-eb71db291448\") " pod="calico-system/whisker-547bf5c54b-8dw4g" Jul 14 21:58:28.155670 kubelet[2191]: I0714 21:58:28.155339 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/913c5eec-9e8f-4194-807a-eb71db291448-whisker-ca-bundle\") pod \"whisker-547bf5c54b-8dw4g\" (UID: \"913c5eec-9e8f-4194-807a-eb71db291448\") " pod="calico-system/whisker-547bf5c54b-8dw4g" Jul 14 21:58:28.155670 kubelet[2191]: I0714 21:58:28.155401 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/913c5eec-9e8f-4194-807a-eb71db291448-whisker-backend-key-pair\") pod \"whisker-547bf5c54b-8dw4g\" (UID: \"913c5eec-9e8f-4194-807a-eb71db291448\") " pod="calico-system/whisker-547bf5c54b-8dw4g" Jul 14 21:58:28.328000 audit[3601]: AVC avc: denied { write } for pid=3601 comm="tee" name="fd" dev="proc" ino=20003 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.330135 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 14 21:58:28.330206 kernel: audit: type=1400 audit(1752530308.328:312): avc: denied { write } for pid=3601 comm="tee" name="fd" dev="proc" ino=20003 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.328000 audit[3601]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffebc867d5 a2=241 a3=1b6 items=1 ppid=3576 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.335179 kernel: audit: type=1300 audit(1752530308.328:312): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffebc867d5 a2=241 a3=1b6 items=1 ppid=3576 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.335279 kernel: audit: type=1307 audit(1752530308.328:312): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 14 21:58:28.328000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 14 21:58:28.328000 audit: PATH item=0 name="/dev/fd/63" inode=20000 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.339689 kernel: audit: type=1302 audit(1752530308.328:312): item=0 name="/dev/fd/63" inode=20000 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.339757 kernel: audit: type=1327 audit(1752530308.328:312): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.340000 audit[3622]: AVC avc: denied { write } for pid=3622 comm="tee" name="fd" dev="proc" ino=20008 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.340000 audit[3622]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcc45c7e7 a2=241 a3=1b6 items=1 
ppid=3584 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.345935 kernel: audit: type=1400 audit(1752530308.340:313): avc: denied { write } for pid=3622 comm="tee" name="fd" dev="proc" ino=20008 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.346018 kernel: audit: type=1300 audit(1752530308.340:313): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcc45c7e7 a2=241 a3=1b6 items=1 ppid=3584 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.346038 kernel: audit: type=1307 audit(1752530308.340:313): cwd="/etc/service/enabled/cni/log" Jul 14 21:58:28.340000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 14 21:58:28.340000 audit: PATH item=0 name="/dev/fd/63" inode=20005 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.348393 kernel: audit: type=1302 audit(1752530308.340:313): item=0 name="/dev/fd/63" inode=20005 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.348478 kernel: audit: type=1327 audit(1752530308.340:313): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.347000 audit[3633]: AVC avc: denied { write } for pid=3633 comm="tee" 
name="fd" dev="proc" ino=19111 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.347000 audit[3633]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe82c77e5 a2=241 a3=1b6 items=1 ppid=3583 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.347000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 14 21:58:28.347000 audit: PATH item=0 name="/dev/fd/63" inode=18164 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.354000 audit[3647]: AVC avc: denied { write } for pid=3647 comm="tee" name="fd" dev="proc" ino=20014 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.354000 audit[3647]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcb06e7d6 a2=241 a3=1b6 items=1 ppid=3578 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.354000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 14 21:58:28.354000 audit: PATH item=0 name="/dev/fd/63" inode=20651 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.354000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.365000 audit[3642]: AVC avc: denied { write } for pid=3642 comm="tee" name="fd" dev="proc" ino=20658 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.365000 audit[3642]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffab707e5 a2=241 a3=1b6 items=1 ppid=3589 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.365000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 14 21:58:28.365000 audit: PATH item=0 name="/dev/fd/63" inode=18169 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.380000 audit[3657]: AVC avc: denied { write } for pid=3657 comm="tee" name="fd" dev="proc" ino=20665 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.380000 audit[3657]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc220b7e5 a2=241 a3=1b6 items=1 ppid=3586 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.380000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 14 21:58:28.380000 audit: PATH item=0 name="/dev/fd/63" inode=20662 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.383440 env[1319]: time="2025-07-14T21:58:28.383385670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547bf5c54b-8dw4g,Uid:913c5eec-9e8f-4194-807a-eb71db291448,Namespace:calico-system,Attempt:0,}" Jul 14 21:58:28.386000 audit[3653]: AVC avc: denied { write } for pid=3653 comm="tee" name="fd" dev="proc" ino=19122 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 21:58:28.386000 audit[3653]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffce33c7e6 a2=241 a3=1b6 items=1 ppid=3577 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.386000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 14 21:58:28.386000 audit: PATH item=0 name="/dev/fd/63" inode=20016 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 21:58:28.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 21:58:28.567209 systemd-networkd[1104]: califcc7417d7e8: Link UP Jul 14 21:58:28.568614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 21:58:28.568715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califcc7417d7e8: link becomes ready Jul 14 21:58:28.569743 systemd-networkd[1104]: califcc7417d7e8: Gained carrier Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit: BPF prog-id=10 op=LOAD Jul 14 21:58:28.577000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffce6b87f8 a2=98 
a3=ffffce6b87e8 items=0 ppid=3588 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.577000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 21:58:28.577000 audit: BPF prog-id=10 op=UNLOAD Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit: BPF prog-id=11 op=LOAD Jul 14 21:58:28.577000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffce6b86a8 a2=74 a3=95 items=0 ppid=3588 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.577000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 21:58:28.577000 audit: BPF prog-id=11 op=UNLOAD Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: AVC avc: denied { bpf } for pid=3721 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit: BPF prog-id=12 op=LOAD Jul 14 21:58:28.577000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffce6b86d8 a2=40 a3=ffffce6b8708 items=0 ppid=3588 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.577000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 21:58:28.577000 audit: BPF prog-id=12 op=UNLOAD Jul 14 21:58:28.577000 audit[3721]: 
AVC avc: denied { perfmon } for pid=3721 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.577000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffce6b87f0 a2=50 a3=0 items=0 ppid=3588 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.577000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.579000 audit: BPF prog-id=13 op=LOAD Jul 14 21:58:28.579000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1aecb98 a2=98 a3=ffffd1aecb88 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.579000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.581000 audit: BPF prog-id=13 op=UNLOAD Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } 
for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit: BPF prog-id=14 op=LOAD Jul 14 21:58:28.581000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd1aec828 a2=74 a3=95 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.581000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.581000 audit: BPF prog-id=14 op=UNLOAD Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 
audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.581000 audit: BPF prog-id=15 op=LOAD Jul 14 21:58:28.581000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd1aec888 a2=94 a3=2 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.581000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.581000 audit: BPF prog-id=15 op=UNLOAD Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.433 [INFO][3658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.462 [INFO][3658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--547bf5c54b--8dw4g-eth0 whisker-547bf5c54b- calico-system 913c5eec-9e8f-4194-807a-eb71db291448 941 0 2025-07-14 21:58:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:547bf5c54b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-547bf5c54b-8dw4g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califcc7417d7e8 [] [] }} ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.463 [INFO][3658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.497 [INFO][3680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" HandleID="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Workload="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.498 
[INFO][3680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" HandleID="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Workload="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-547bf5c54b-8dw4g", "timestamp":"2025-07-14 21:58:28.497290399 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.498 [INFO][3680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.498 [INFO][3680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.498 [INFO][3680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.513 [INFO][3680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.529 [INFO][3680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.533 [INFO][3680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.535 [INFO][3680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.539 [INFO][3680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.539 [INFO][3680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.541 [INFO][3680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2 Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.545 [INFO][3680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.550 [INFO][3680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" host="localhost" Jul 14 
21:58:28.588007 env[1319]: 2025-07-14 21:58:28.550 [INFO][3680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" host="localhost" Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.550 [INFO][3680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:28.588007 env[1319]: 2025-07-14 21:58:28.551 [INFO][3680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" HandleID="k8s-pod-network.68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Workload="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.588533 env[1319]: 2025-07-14 21:58:28.553 [INFO][3658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--547bf5c54b--8dw4g-eth0", GenerateName:"whisker-547bf5c54b-", Namespace:"calico-system", SelfLink:"", UID:"913c5eec-9e8f-4194-807a-eb71db291448", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"547bf5c54b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-547bf5c54b-8dw4g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califcc7417d7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:28.588533 env[1319]: 2025-07-14 21:58:28.553 [INFO][3658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.588533 env[1319]: 2025-07-14 21:58:28.553 [INFO][3658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcc7417d7e8 ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.588533 env[1319]: 2025-07-14 21:58:28.570 [INFO][3658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.588533 env[1319]: 2025-07-14 21:58:28.571 [INFO][3658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--547bf5c54b--8dw4g-eth0", GenerateName:"whisker-547bf5c54b-", Namespace:"calico-system", SelfLink:"", UID:"913c5eec-9e8f-4194-807a-eb71db291448", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"547bf5c54b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2", Pod:"whisker-547bf5c54b-8dw4g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califcc7417d7e8", MAC:"4e:ff:6c:7f:14:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:28.588533 env[1319]: 2025-07-14 21:58:28.582 [INFO][3658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2" Namespace="calico-system" Pod="whisker-547bf5c54b-8dw4g" WorkloadEndpoint="localhost-k8s-whisker--547bf5c54b--8dw4g-eth0" Jul 14 21:58:28.600499 env[1319]: time="2025-07-14T21:58:28.599510855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:28.600499 env[1319]: time="2025-07-14T21:58:28.599562414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:28.600499 env[1319]: time="2025-07-14T21:58:28.599572254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:28.600499 env[1319]: time="2025-07-14T21:58:28.599795332Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2 pid=3737 runtime=io.containerd.runc.v2 Jul 14 21:58:28.661974 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 
audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit: BPF prog-id=16 op=LOAD Jul 14 21:58:28.681000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd1aec848 a2=40 a3=ffffd1aec878 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.681000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.681000 audit: BPF prog-id=16 op=UNLOAD Jul 14 21:58:28.681000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.681000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd1aec960 a2=50 a3=0 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.681000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.683544 env[1319]: time="2025-07-14T21:58:28.683492668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547bf5c54b-8dw4g,Uid:913c5eec-9e8f-4194-807a-eb71db291448,Namespace:calico-system,Attempt:0,} returns sandbox id \"68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2\"" Jul 14 21:58:28.686269 env[1319]: time="2025-07-14T21:58:28.685010931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1aec8b8 a2=28 a3=ffffd1aec9e8 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1aec8e8 a2=28 a3=ffffd1aeca18 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1aec798 a2=28 a3=ffffd1aec8c8 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1aec908 a2=28 a3=ffffd1aeca38 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1aec8e8 a2=28 a3=ffffd1aeca18 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1aec8d8 a2=28 a3=ffffd1aeca08 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1aec908 a2=28 a3=ffffd1aeca38 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1aec8e8 a2=28 a3=ffffd1aeca18 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 
21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1aec908 a2=28 a3=ffffd1aeca38 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1aec8d8 a2=28 a3=ffffd1aeca08 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1aec958 a2=28 a3=ffffd1aeca98 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd1aec690 a2=50 a3=0 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit: BPF prog-id=17 op=LOAD Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd1aec698 a2=94 a3=5 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit: BPF prog-id=17 op=UNLOAD Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd1aec7a0 a2=50 a3=0 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd1aec8e8 a2=4 a3=3 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.692000 audit[3722]: AVC avc: denied { confidentiality } for pid=3722 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 21:58:28.692000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1aec8c8 a2=94 a3=6 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { confidentiality } for pid=3722 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 21:58:28.693000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1aec098 a2=94 a3=83 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.693000 audit[3722]: AVC avc: denied { confidentiality } for pid=3722 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 21:58:28.693000 audit[3722]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1aec098 a2=94 a3=83 items=0 ppid=3588 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit: BPF prog-id=18 op=LOAD Jul 14 21:58:28.711000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd216d418 a2=98 a3=ffffd216d408 items=0 ppid=3588 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 14 21:58:28.711000 audit: BPF prog-id=18 op=UNLOAD Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit: BPF prog-id=19 op=LOAD Jul 14 21:58:28.711000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd216d2c8 a2=74 a3=95 items=0 ppid=3588 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.711000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 14 21:58:28.711000 audit: BPF prog-id=19 op=UNLOAD Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { perfmon } for pid=3772 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit[3772]: AVC avc: denied { bpf } for pid=3772 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.711000 audit: BPF prog-id=20 op=LOAD Jul 14 21:58:28.711000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd216d2f8 a2=40 a3=ffffd216d328 items=0 ppid=3588 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 14 21:58:28.711000 audit: BPF prog-id=20 op=UNLOAD Jul 14 21:58:28.777633 systemd-networkd[1104]: vxlan.calico: Link UP Jul 14 21:58:28.777640 systemd-networkd[1104]: vxlan.calico: Gained carrier Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit: BPF prog-id=21 op=LOAD Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe067d238 a2=98 a3=ffffe067d228 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit: BPF prog-id=21 op=UNLOAD Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit: BPF prog-id=22 op=LOAD Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe067cf18 a2=74 a3=95 items=0 ppid=3588 pid=3798 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit: BPF prog-id=22 op=UNLOAD Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit: BPF prog-id=23 op=LOAD Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe067cf78 a2=94 a3=2 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit: BPF prog-id=23 op=UNLOAD Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe067cfa8 a2=28 a3=ffffe067d0d8 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for 
pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe067cfd8 a2=28 a3=ffffe067d108 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe067ce88 a2=28 a3=ffffe067cfb8 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe067cff8 a2=28 a3=ffffe067d128 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe067cfd8 a2=28 a3=ffffe067d108 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.786000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.786000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe067cfc8 a2=28 a3=ffffe067d0f8 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.786000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe067cff8 a2=28 a3=ffffe067d128 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe067cfd8 a2=28 a3=ffffe067d108 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe067cff8 a2=28 a3=ffffe067d128 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe067cfc8 a2=28 a3=ffffe067d0f8 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffe067d048 a2=28 a3=ffffe067d188 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit: BPF prog-id=24 op=LOAD Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe067ce68 a2=40 a3=ffffe067ce98 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit: BPF prog-id=24 op=UNLOAD Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffe067ce90 a2=50 a3=0 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 
audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffe067ce90 a2=50 a3=0 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit: BPF prog-id=25 op=LOAD Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe067c5f8 a2=94 a3=2 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.787000 audit: BPF prog-id=25 op=UNLOAD Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { perfmon } for pid=3798 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit[3798]: AVC avc: denied { bpf } for pid=3798 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.787000 audit: BPF prog-id=26 op=LOAD Jul 14 21:58:28.787000 audit[3798]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe067c788 a2=94 a3=30 items=0 ppid=3588 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.787000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for 
pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit: BPF prog-id=27 op=LOAD Jul 14 21:58:28.794000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbda1508 a2=98 a3=ffffcbda14f8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.794000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.794000 audit: BPF prog-id=27 op=UNLOAD Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit: BPF prog-id=28 op=LOAD Jul 14 21:58:28.794000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcbda1198 a2=74 a3=95 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.794000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.794000 audit: BPF prog-id=28 op=UNLOAD Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.794000 audit: BPF prog-id=29 op=LOAD Jul 14 21:58:28.794000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcbda11f8 a2=94 a3=2 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.794000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.794000 audit: BPF prog-id=29 op=UNLOAD Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit[3802]: AVC 
avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.881000 audit: BPF prog-id=30 op=LOAD Jul 14 21:58:28.881000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcbda11b8 a2=40 a3=ffffcbda11e8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.882000 audit: BPF prog-id=30 op=UNLOAD Jul 14 21:58:28.883000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.883000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcbda12d0 a2=50 a3=0 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.883000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.895000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.895000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcbda1228 a2=28 a3=ffffcbda1358 items=0 ppid=3588 pid=3802 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.895000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.896000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.896000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcbda1258 a2=28 a3=ffffcbda1388 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.896000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.896000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.896000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcbda1108 a2=28 a3=ffffcbda1238 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.896000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 
21:58:28.896000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.896000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcbda1278 a2=28 a3=ffffcbda13a8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.896000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.896000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.896000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcbda1258 a2=28 a3=ffffcbda1388 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.896000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.896000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.896000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcbda1248 a2=28 a3=ffffcbda1378 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.896000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.897000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.897000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcbda1278 a2=28 a3=ffffcbda13a8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.897000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.897000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.897000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcbda1258 a2=28 a3=ffffcbda1388 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.897000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.897000 audit[3802]: AVC avc: denied { bpf } for pid=3802 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.897000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcbda1278 a2=28 a3=ffffcbda13a8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.897000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.897000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.897000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcbda1248 a2=28 a3=ffffcbda1378 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.897000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.897000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.897000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcbda12c8 a2=28 a3=ffffcbda1408 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.897000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.897000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.897000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcbda1000 a2=50 a3=0 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.897000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { perfmon } for 
pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.898000 audit: BPF prog-id=31 op=LOAD Jul 14 21:58:28.898000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcbda1008 a2=94 a3=5 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.898000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.899000 audit: BPF prog-id=31 op=UNLOAD Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcbda1110 
a2=50 a3=0 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.899000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcbda1258 a2=4 a3=3 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:28.899000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 21:58:28.899000 audit[3802]: AVC avc: denied { confidentiality } for pid=3802 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 21:58:28.899000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcbda1238 a2=94 a3=6 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 
21:58:28.899000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.900000 audit[3802]: AVC avc: denied { confidentiality } for pid=3802 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Jul 14 21:58:28.900000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcbda0a08 a2=94 a3=83 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.900000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { perfmon } for pid=3802 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.901000 audit[3802]: AVC avc: denied { confidentiality } for pid=3802 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Jul 14 21:58:28.901000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcbda0a08 a2=94 a3=83 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.901000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.902000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.902000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcbda2448 a2=10 a3=ffffcbda2538 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.902000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.902000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcbda2308 a2=10 a3=ffffcbda23f8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.902000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.902000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcbda2278 a2=10 a3=ffffcbda23f8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.902000 audit[3802]: AVC avc: denied { bpf } for pid=3802 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 14 21:58:28.902000 audit[3802]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcbda2278 a2=10 a3=ffffcbda23f8 items=0 ppid=3588 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Jul 14 21:58:28.905143 kubelet[2191]: I0714 21:58:28.905111 2191 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6ebd15-2e3d-40ae-9a5f-701f3c026863" path="/var/lib/kubelet/pods/4e6ebd15-2e3d-40ae-9a5f-701f3c026863/volumes"
Jul 14 21:58:28.913000 audit: BPF prog-id=26 op=UNLOAD
Jul 14 21:58:28.953000 audit[3830]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 14 21:58:28.953000 audit[3830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc2c98b90 a2=0 a3=ffff8be0ffa8 items=0 ppid=3588 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.953000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 14 21:58:28.956000 audit[3829]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 14 21:58:28.956000 audit[3829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd7cc95d0 a2=0 a3=ffffa6062fa8 items=0 ppid=3588 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.956000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 14 21:58:28.961000 audit[3828]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 14 21:58:28.961000 audit[3828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd6955220 a2=0 a3=ffff858f8fa8 items=0 ppid=3588 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.961000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 14 21:58:28.970000 audit[3833]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3833 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 14 21:58:28.970000 audit[3833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffcaff2d40 a2=0 a3=ffff92cf9fa8 items=0 ppid=3588 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:28.970000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 14 21:58:29.699014 systemd-networkd[1104]: califcc7417d7e8: Gained IPv6LL
Jul 14 21:58:29.951191 env[1319]: time="2025-07-14T21:58:29.951080935Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:29.952671 env[1319]: time="2025-07-14T21:58:29.952642999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:29.954163 env[1319]: time="2025-07-14T21:58:29.954134225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:29.955455 env[1319]: time="2025-07-14T21:58:29.955432732Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:29.955958 env[1319]: time="2025-07-14T21:58:29.955920127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 14 21:58:29.959573 env[1319]: time="2025-07-14T21:58:29.959529852Z" level=info msg="CreateContainer within sandbox \"68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 14 21:58:29.970082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071868460.mount: Deactivated successfully.
Jul 14 21:58:29.971230 env[1319]: time="2025-07-14T21:58:29.971198938Z" level=info msg="CreateContainer within sandbox \"68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ccdafe3e64182fa037a84c3f79cea1b4473f9656d380dc231faeacab50c1d736\""
Jul 14 21:58:29.971690 env[1319]: time="2025-07-14T21:58:29.971667653Z" level=info msg="StartContainer for \"ccdafe3e64182fa037a84c3f79cea1b4473f9656d380dc231faeacab50c1d736\""
Jul 14 21:58:30.046771 env[1319]: time="2025-07-14T21:58:30.046726723Z" level=info msg="StartContainer for \"ccdafe3e64182fa037a84c3f79cea1b4473f9656d380dc231faeacab50c1d736\" returns successfully"
Jul 14 21:58:30.048903 env[1319]: time="2025-07-14T21:58:30.048864424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 14 21:58:30.658786 systemd-networkd[1104]: vxlan.calico: Gained IPv6LL
Jul 14 21:58:31.870695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458502600.mount: Deactivated successfully.
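The audit PROCTITLE records repeated above carry the command line of the audited process as hex-encoded bytes, with NUL bytes separating the individual arguments. A minimal decoding sketch (the hex value is the one from the bpftool entries above, split into per-argument chunks for readability):

```python
# Decode an audit PROCTITLE value: argv is hex-encoded, and arguments
# are separated by NUL bytes.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

# Hex copied from the bpftool PROCTITLE entries in the log above.
proctitle = (
    "627066746F6F6C00"    # bpftool\0
    "2D2D6A736F6E00"      # --json\0
    "2D2D70726574747900"  # --pretty\0
    "70726F6700"          # prog\0
    "73686F7700"          # show\0
    "70696E6E656400"      # pinned\0
    "2F7379732F66732F6270662F63616C69636F2F7864702F"
    "70726566696C7465725F76315F63616C69636F5F746D705F41"
)
print(decode_proctitle(proctitle))
# → ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
#    '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']
```

This matches the adjacent SYSCALL records: with arch=c00000b7 (aarch64), syscall 280 is bpf(2) in the asm-generic syscall table, and exit=-22 is -EINVAL.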
Jul 14 21:58:31.883896 env[1319]: time="2025-07-14T21:58:31.883856774Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:31.885240 env[1319]: time="2025-07-14T21:58:31.885205803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:31.887197 env[1319]: time="2025-07-14T21:58:31.887170868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:31.888655 env[1319]: time="2025-07-14T21:58:31.888627056Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 21:58:31.889165 env[1319]: time="2025-07-14T21:58:31.889124972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\""
Jul 14 21:58:31.891612 env[1319]: time="2025-07-14T21:58:31.891257235Z" level=info msg="CreateContainer within sandbox \"68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 14 21:58:31.902466 env[1319]: time="2025-07-14T21:58:31.902408468Z" level=info msg="CreateContainer within sandbox \"68666b71e2903fbb8258383c8d8c48885c1f6a011e856a236b1e88920b0922c2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1c5f462756dbfe87f7881a74db8933bc2e790e2a5c7100fc56722509b7d6631d\""
Jul 14 21:58:31.902941 env[1319]: time="2025-07-14T21:58:31.902916584Z" level=info msg="StartContainer for \"1c5f462756dbfe87f7881a74db8933bc2e790e2a5c7100fc56722509b7d6631d\""
Jul 14 21:58:31.965919 env[1319]: time="2025-07-14T21:58:31.965875369Z" level=info msg="StartContainer for \"1c5f462756dbfe87f7881a74db8933bc2e790e2a5c7100fc56722509b7d6631d\" returns successfully"
Jul 14 21:58:32.060457 kubelet[2191]: I0714 21:58:32.058680 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-547bf5c54b-8dw4g" podStartSLOduration=0.85350106 podStartE2EDuration="4.058663812s" podCreationTimestamp="2025-07-14 21:58:28 +0000 UTC" firstStartedPulling="2025-07-14 21:58:28.684751494 +0000 UTC m=+39.874355476" lastFinishedPulling="2025-07-14 21:58:31.889914206 +0000 UTC m=+43.079518228" observedRunningTime="2025-07-14 21:58:32.058045056 +0000 UTC m=+43.247649078" watchObservedRunningTime="2025-07-14 21:58:32.058663812 +0000 UTC m=+43.248267834"
Jul 14 21:58:32.079000 audit[3919]: NETFILTER_CFG table=filter:105 family=2 entries=19 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:32.079000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe9f9edc0 a2=0 a3=1 items=0 ppid=2295 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:32.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:32.089000 audit[3919]: NETFILTER_CFG table=nat:106 family=2 entries=21 op=nft_register_chain pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 21:58:32.089000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffe9f9edc0 a2=0 a3=1 items=0 ppid=2295 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:32.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 21:58:32.902778 env[1319]: time="2025-07-14T21:58:32.902958711Z" level=info msg="StopPodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\""
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.967 [INFO][3931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.967 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" iface="eth0" netns="/var/run/netns/cni-478a8ef4-f79e-da9e-5cec-9eeb47b06c2e"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.967 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" iface="eth0" netns="/var/run/netns/cni-478a8ef4-f79e-da9e-5cec-9eeb47b06c2e"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.967 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" iface="eth0" netns="/var/run/netns/cni-478a8ef4-f79e-da9e-5cec-9eeb47b06c2e"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.967 [INFO][3931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.967 [INFO][3931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.986 [INFO][3940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.986 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.986 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.997 [WARNING][3940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:32.997 [INFO][3940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:33.000 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:58:33.006421 env[1319]: 2025-07-14 21:58:33.002 [INFO][3931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879"
Jul 14 21:58:33.011151 env[1319]: time="2025-07-14T21:58:33.007704429Z" level=info msg="TearDown network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" successfully"
Jul 14 21:58:33.011151 env[1319]: time="2025-07-14T21:58:33.007746509Z" level=info msg="StopPodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" returns successfully"
Jul 14 21:58:33.009792 systemd[1]: run-netns-cni\x2d478a8ef4\x2df79e\x2dda9e\x2d5cec\x2d9eeb47b06c2e.mount: Deactivated successfully.
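The `run-netns-cni\x2d…mount` unit name above uses systemd's unit-name escaping for paths: `/` is written as `-`, and a literal `-` is written as `\x2d`. A small sketch reversing that escaping for a `.mount` unit (the unit string is the one from the log line above):

```python
# Reverse systemd's unit-name escaping for .mount units:
# "/" is encoded as "-", and a literal "-" is encoded as "\x2d".
def mount_unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    # Replace "-" with "/" first; the four-character sequence "\x2d"
    # contains no "-" itself, so it survives that step untouched.
    return "/" + name.replace("-", "/").replace("\\x2d", "-")

unit = "run-netns-cni\\x2d478a8ef4\\x2df79e\\x2dda9e\\x2d5cec\\x2d9eeb47b06c2e.mount"
print(mount_unit_to_path(unit))
# → /run/netns/cni-478a8ef4-f79e-da9e-5cec-9eeb47b06c2e
```

The decoded path corresponds to the `netns="/var/run/netns/cni-478a8ef4-…"` field in the teardown entries, since `/var/run` is a symlink to `/run` on these systems.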
Jul 14 21:58:33.012569 env[1319]: time="2025-07-14T21:58:33.012526200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67865bb6d5-jb527,Uid:c1a1b271-e606-49ed-b47b-b98b88fdbed2,Namespace:calico-system,Attempt:1,}"
Jul 14 21:58:33.197735 systemd-networkd[1104]: cali3dc9a25054f: Link UP
Jul 14 21:58:33.199865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 14 21:58:33.199935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3dc9a25054f: link becomes ready
Jul 14 21:58:33.199713 systemd-networkd[1104]: cali3dc9a25054f: Gained carrier
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.139 [INFO][3948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0 calico-kube-controllers-67865bb6d5- calico-system c1a1b271-e606-49ed-b47b-b98b88fdbed2 971 0 2025-07-14 21:58:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67865bb6d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67865bb6d5-jb527 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3dc9a25054f [] [] }} ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.139 [INFO][3948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.161 [INFO][3963] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" HandleID="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.161 [INFO][3963] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" HandleID="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67865bb6d5-jb527", "timestamp":"2025-07-14 21:58:33.161603378 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.161 [INFO][3963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.161 [INFO][3963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
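The IPAM entries in this stretch of the log assign addresses out of the host-affine block 192.168.88.128/26 and eventually hand 192.168.88.130 to the calico-kube-controllers pod. A quick sanity check of that block arithmetic with Python's standard ipaddress module:

```python
import ipaddress

# The block affinity claimed by the host in the IPAM log entries.
block = ipaddress.ip_network("192.168.88.128/26")

# A /26 spans 64 addresses: 192.168.88.128 through 192.168.88.191.
print(block.num_addresses)  # → 64
print(block.network_address, block.broadcast_address)

# The address Calico assigns to the pod falls inside this block.
assert ipaddress.ip_address("192.168.88.130") in block
```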
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.161 [INFO][3963] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.171 [INFO][3963] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.176 [INFO][3963] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.180 [INFO][3963] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.181 [INFO][3963] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.183 [INFO][3963] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.183 [INFO][3963] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.185 [INFO][3963] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.188 [INFO][3963] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.193 [INFO][3963] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.194 [INFO][3963] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" host="localhost"
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.194 [INFO][3963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:58:33.211825 env[1319]: 2025-07-14 21:58:33.194 [INFO][3963] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" HandleID="k8s-pod-network.1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.212436 env[1319]: 2025-07-14 21:58:33.196 [INFO][3948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0", GenerateName:"calico-kube-controllers-67865bb6d5-", Namespace:"calico-system", SelfLink:"", UID:"c1a1b271-e606-49ed-b47b-b98b88fdbed2", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67865bb6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67865bb6d5-jb527", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dc9a25054f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:58:33.212436 env[1319]: 2025-07-14 21:58:33.196 [INFO][3948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.212436 env[1319]: 2025-07-14 21:58:33.196 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dc9a25054f ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.212436 env[1319]: 2025-07-14 21:58:33.199 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.212436 env[1319]: 2025-07-14 21:58:33.200 [INFO][3948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0", GenerateName:"calico-kube-controllers-67865bb6d5-", Namespace:"calico-system", SelfLink:"", UID:"c1a1b271-e606-49ed-b47b-b98b88fdbed2", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67865bb6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4", Pod:"calico-kube-controllers-67865bb6d5-jb527", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dc9a25054f", MAC:"ce:95:bd:82:1d:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:58:33.212436 env[1319]: 2025-07-14 21:58:33.209 [INFO][3948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4" Namespace="calico-system" Pod="calico-kube-controllers-67865bb6d5-jb527" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0"
Jul 14 21:58:33.218000 audit[3981]: NETFILTER_CFG table=filter:107 family=2 entries=36 op=nft_register_chain pid=3981 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 14 21:58:33.218000 audit[3981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffec3b4e20 a2=0 a3=ffff8ab1ffa8 items=0 ppid=3588 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 21:58:33.218000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 14 21:58:33.224313 env[1319]: time="2025-07-14T21:58:33.224235679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:58:33.224452 env[1319]: time="2025-07-14T21:58:33.224327478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:58:33.224452 env[1319]: time="2025-07-14T21:58:33.224356118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:58:33.224607 env[1319]: time="2025-07-14T21:58:33.224519237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4 pid=3990 runtime=io.containerd.runc.v2
Jul 14 21:58:33.258191 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:58:33.275692 env[1319]: time="2025-07-14T21:58:33.275644328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67865bb6d5-jb527,Uid:c1a1b271-e606-49ed-b47b-b98b88fdbed2,Namespace:calico-system,Attempt:1,} returns sandbox id \"1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4\""
Jul 14 21:58:33.277770 env[1319]: time="2025-07-14T21:58:33.277714955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 14 21:58:33.902778 env[1319]: time="2025-07-14T21:58:33.902735534Z" level=info msg="StopPodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\""
Jul 14 21:58:33.903056 env[1319]: time="2025-07-14T21:58:33.903030092Z" level=info msg="StopPodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\""
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.947 [INFO][4048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.947 [INFO][4048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" iface="eth0" netns="/var/run/netns/cni-ee15384b-77f5-62df-e580-f37dc4d885ce"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.947 [INFO][4048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" iface="eth0" netns="/var/run/netns/cni-ee15384b-77f5-62df-e580-f37dc4d885ce"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.947 [INFO][4048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" iface="eth0" netns="/var/run/netns/cni-ee15384b-77f5-62df-e580-f37dc4d885ce"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.947 [INFO][4048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.947 [INFO][4048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.972 [INFO][4062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0"
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.972 [INFO][4062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.972 [INFO][4062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.983 [WARNING][4062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.983 [INFO][4062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.984 [INFO][4062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:33.993716 env[1319]: 2025-07-14 21:58:33.992 [INFO][4048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:33.999349 systemd[1]: run-netns-cni\x2dee15384b\x2d77f5\x2d62df\x2de580\x2df37dc4d885ce.mount: Deactivated successfully. 
Jul 14 21:58:33.999738 env[1319]: time="2025-07-14T21:58:33.999689147Z" level=info msg="TearDown network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" successfully" Jul 14 21:58:33.999908 env[1319]: time="2025-07-14T21:58:33.999887666Z" level=info msg="StopPodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" returns successfully" Jul 14 21:58:34.001004 env[1319]: time="2025-07-14T21:58:34.000973499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-9xgmn,Uid:e1c592a9-3faf-4978-af8d-8d83292a3475,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.949 [INFO][4047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.949 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" iface="eth0" netns="/var/run/netns/cni-03eeb042-b250-e993-a267-f8506e6ba5e9" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.949 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" iface="eth0" netns="/var/run/netns/cni-03eeb042-b250-e993-a267-f8506e6ba5e9" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.949 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" iface="eth0" netns="/var/run/netns/cni-03eeb042-b250-e993-a267-f8506e6ba5e9" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.949 [INFO][4047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.949 [INFO][4047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.982 [INFO][4064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.982 [INFO][4064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.984 [INFO][4064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.992 [WARNING][4064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.992 [INFO][4064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.993 [INFO][4064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:34.002248 env[1319]: 2025-07-14 21:58:33.996 [INFO][4047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:34.003724 env[1319]: time="2025-07-14T21:58:34.003690925Z" level=info msg="TearDown network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" successfully" Jul 14 21:58:34.003803 env[1319]: time="2025-07-14T21:58:34.003786884Z" level=info msg="StopPodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" returns successfully" Jul 14 21:58:34.004831 kubelet[2191]: E0714 21:58:34.004798 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:34.006251 env[1319]: time="2025-07-14T21:58:34.005181437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzbm6,Uid:a8fc1316-6b04-4d95-89ba-2535a5175aa9,Namespace:kube-system,Attempt:1,}" Jul 14 21:58:34.005671 systemd[1]: run-netns-cni\x2d03eeb042\x2db250\x2de993\x2da267\x2df8506e6ba5e9.mount: Deactivated successfully. 
Jul 14 21:58:34.126622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali298ffd041b9: link becomes ready Jul 14 21:58:34.126265 systemd-networkd[1104]: cali298ffd041b9: Link UP Jul 14 21:58:34.128792 systemd-networkd[1104]: cali298ffd041b9: Gained carrier Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.060 [INFO][4090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0 coredns-7c65d6cfc9- kube-system a8fc1316-6b04-4d95-89ba-2535a5175aa9 979 0 2025-07-14 21:57:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gzbm6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali298ffd041b9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.060 [INFO][4090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.084 [INFO][4114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" HandleID="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.084 [INFO][4114] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" HandleID="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gzbm6", "timestamp":"2025-07-14 21:58:34.084197227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.084 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.084 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.084 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.092 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.097 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.101 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.102 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.104 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 
21:58:34.104 [INFO][4114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.106 [INFO][4114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4 Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.109 [INFO][4114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.114 [INFO][4114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.114 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" host="localhost" Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.114 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:34.137672 env[1319]: 2025-07-14 21:58:34.114 [INFO][4114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" HandleID="k8s-pod-network.c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.138408 env[1319]: 2025-07-14 21:58:34.117 [INFO][4090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a8fc1316-6b04-4d95-89ba-2535a5175aa9", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gzbm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298ffd041b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:34.138408 env[1319]: 2025-07-14 21:58:34.117 [INFO][4090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.138408 env[1319]: 2025-07-14 21:58:34.117 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali298ffd041b9 ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.138408 env[1319]: 2025-07-14 21:58:34.126 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.138408 env[1319]: 2025-07-14 21:58:34.127 [INFO][4090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a8fc1316-6b04-4d95-89ba-2535a5175aa9", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4", Pod:"coredns-7c65d6cfc9-gzbm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298ffd041b9", MAC:"8a:c5:99:80:56:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:34.138408 env[1319]: 2025-07-14 21:58:34.136 [INFO][4090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzbm6" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:34.146604 env[1319]: time="2025-07-14T21:58:34.146515024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:34.146604 env[1319]: time="2025-07-14T21:58:34.146564504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:34.146604 env[1319]: time="2025-07-14T21:58:34.146574864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:34.146922 env[1319]: time="2025-07-14T21:58:34.146868142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4 pid=4145 runtime=io.containerd.runc.v2 Jul 14 21:58:34.147000 audit[4153]: NETFILTER_CFG table=filter:108 family=2 entries=46 op=nft_register_chain pid=4153 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:34.149840 kernel: kauditd_printk_skb: 556 callbacks suppressed Jul 14 21:58:34.149905 kernel: audit: type=1325 audit(1752530314.147:424): table=filter:108 family=2 entries=46 op=nft_register_chain pid=4153 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:34.147000 audit[4153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffc87d8b50 a2=0 a3=fffface40fa8 items=0 ppid=3588 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:34.153872 kernel: audit: type=1300 audit(1752530314.147:424): arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffc87d8b50 a2=0 a3=fffface40fa8 items=0 ppid=3588 pid=4153 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:34.154001 kernel: audit: type=1327 audit(1752530314.147:424): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:34.147000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:34.181644 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:34.201842 env[1319]: time="2025-07-14T21:58:34.201788497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzbm6,Uid:a8fc1316-6b04-4d95-89ba-2535a5175aa9,Namespace:kube-system,Attempt:1,} returns sandbox id \"c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4\"" Jul 14 21:58:34.202565 kubelet[2191]: E0714 21:58:34.202544 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:34.204276 env[1319]: time="2025-07-14T21:58:34.204243845Z" level=info msg="CreateContainer within sandbox \"c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:58:34.224303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 21:58:34.224389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic57ebfb3bf1: link becomes ready Jul 14 21:58:34.222597 systemd-networkd[1104]: calic57ebfb3bf1: Link UP Jul 14 21:58:34.224351 systemd-networkd[1104]: calic57ebfb3bf1: Gained carrier Jul 14 21:58:34.230988 env[1319]: time="2025-07-14T21:58:34.230225750Z" level=info msg="CreateContainer within 
sandbox \"c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fb98df045823fab82e505052217fe2cd18043e09f27674f74285b37df88fd81\"" Jul 14 21:58:34.233448 env[1319]: time="2025-07-14T21:58:34.231576103Z" level=info msg="StartContainer for \"9fb98df045823fab82e505052217fe2cd18043e09f27674f74285b37df88fd81\"" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.062 [INFO][4091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0 calico-apiserver-79b975cf4d- calico-apiserver e1c592a9-3faf-4978-af8d-8d83292a3475 978 0 2025-07-14 21:58:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79b975cf4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79b975cf4d-9xgmn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic57ebfb3bf1 [] [] }} ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.062 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.099 [INFO][4120] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" 
HandleID="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.099 [INFO][4120] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" HandleID="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000510200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79b975cf4d-9xgmn", "timestamp":"2025-07-14 21:58:34.099641467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.099 [INFO][4120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.114 [INFO][4120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.114 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.193 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.197 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.202 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.205 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.207 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.207 [INFO][4120] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.209 [INFO][4120] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982 Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.213 [INFO][4120] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.218 [INFO][4120] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" host="localhost" Jul 14 
21:58:34.237359 env[1319]: 2025-07-14 21:58:34.218 [INFO][4120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" host="localhost" Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.218 [INFO][4120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:34.237359 env[1319]: 2025-07-14 21:58:34.218 [INFO][4120] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" HandleID="k8s-pod-network.49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.238193 env[1319]: 2025-07-14 21:58:34.220 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1c592a9-3faf-4978-af8d-8d83292a3475", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79b975cf4d-9xgmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic57ebfb3bf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:34.238193 env[1319]: 2025-07-14 21:58:34.220 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.238193 env[1319]: 2025-07-14 21:58:34.220 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic57ebfb3bf1 ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.238193 env[1319]: 2025-07-14 21:58:34.225 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.238193 env[1319]: 2025-07-14 21:58:34.225 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" 
Pod="calico-apiserver-79b975cf4d-9xgmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1c592a9-3faf-4978-af8d-8d83292a3475", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982", Pod:"calico-apiserver-79b975cf4d-9xgmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic57ebfb3bf1", MAC:"b2:95:67:1e:ad:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:34.238193 env[1319]: 2025-07-14 21:58:34.234 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-9xgmn" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:34.249000 audit[4202]: NETFILTER_CFG table=filter:109 family=2 entries=58 op=nft_register_chain pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:34.249000 audit[4202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=ffffea348710 a2=0 a3=ffff958acfa8 items=0 ppid=3588 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:34.257062 kernel: audit: type=1325 audit(1752530314.249:425): table=filter:109 family=2 entries=58 op=nft_register_chain pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:34.257120 kernel: audit: type=1300 audit(1752530314.249:425): arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=ffffea348710 a2=0 a3=ffff958acfa8 items=0 ppid=3588 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:34.257140 kernel: audit: type=1327 audit(1752530314.249:425): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:34.249000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:34.261186 env[1319]: time="2025-07-14T21:58:34.261123829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:34.261383 env[1319]: time="2025-07-14T21:58:34.261335628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:34.261480 env[1319]: time="2025-07-14T21:58:34.261459028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:34.261795 env[1319]: time="2025-07-14T21:58:34.261738386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982 pid=4209 runtime=io.containerd.runc.v2 Jul 14 21:58:34.293258 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:34.299212 env[1319]: time="2025-07-14T21:58:34.299164272Z" level=info msg="StartContainer for \"9fb98df045823fab82e505052217fe2cd18043e09f27674f74285b37df88fd81\" returns successfully" Jul 14 21:58:34.319580 env[1319]: time="2025-07-14T21:58:34.319468447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-9xgmn,Uid:e1c592a9-3faf-4978-af8d-8d83292a3475,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982\"" Jul 14 21:58:34.903434 env[1319]: time="2025-07-14T21:58:34.903395058Z" level=info msg="StopPodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\"" Jul 14 21:58:34.904343 env[1319]: time="2025-07-14T21:58:34.904303893Z" level=info msg="StopPodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\"" Jul 14 21:58:34.904477 env[1319]: time="2025-07-14T21:58:34.904452013Z" level=info msg="StopPodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\"" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:34.969 [INFO][4296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 
21:58:34.969 [INFO][4296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" iface="eth0" netns="/var/run/netns/cni-b3819c87-1bf4-51b4-dee6-ec1f31d6fa74" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:34.969 [INFO][4296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" iface="eth0" netns="/var/run/netns/cni-b3819c87-1bf4-51b4-dee6-ec1f31d6fa74" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:34.969 [INFO][4296] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" iface="eth0" netns="/var/run/netns/cni-b3819c87-1bf4-51b4-dee6-ec1f31d6fa74" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:34.969 [INFO][4296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:34.969 [INFO][4296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.007 [INFO][4324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.007 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.007 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.020 [WARNING][4324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.020 [INFO][4324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.023 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:35.026433 env[1319]: 2025-07-14 21:58:35.024 [INFO][4296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:35.031534 systemd[1]: run-netns-cni\x2db3819c87\x2d1bf4\x2d51b4\x2ddee6\x2dec1f31d6fa74.mount: Deactivated successfully. 
Jul 14 21:58:35.032917 env[1319]: time="2025-07-14T21:58:35.032873933Z" level=info msg="TearDown network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" successfully" Jul 14 21:58:35.033071 env[1319]: time="2025-07-14T21:58:35.033051532Z" level=info msg="StopPodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" returns successfully" Jul 14 21:58:35.033936 env[1319]: time="2025-07-14T21:58:35.033891568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-tvnn9,Uid:6e01068b-a03a-4c0c-99a9-7e9275cb210b,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:34.992 [INFO][4312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:34.992 [INFO][4312] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" iface="eth0" netns="/var/run/netns/cni-304af03d-bfcd-c6b4-b25d-3407d43c812c" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:34.992 [INFO][4312] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" iface="eth0" netns="/var/run/netns/cni-304af03d-bfcd-c6b4-b25d-3407d43c812c" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:34.992 [INFO][4312] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" iface="eth0" netns="/var/run/netns/cni-304af03d-bfcd-c6b4-b25d-3407d43c812c" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:34.992 [INFO][4312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:34.992 [INFO][4312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.028 [INFO][4339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.028 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.029 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.040 [WARNING][4339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.040 [INFO][4339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.043 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:35.060069 env[1319]: 2025-07-14 21:58:35.055 [INFO][4312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:35.062878 env[1319]: time="2025-07-14T21:58:35.060171934Z" level=info msg="TearDown network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" successfully" Jul 14 21:58:35.062878 env[1319]: time="2025-07-14T21:58:35.060200214Z" level=info msg="StopPodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" returns successfully" Jul 14 21:58:35.062878 env[1319]: time="2025-07-14T21:58:35.061686767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8mddv,Uid:0fb397a8-167c-4a3c-b754-5643d7b757de,Namespace:calico-system,Attempt:1,}" Jul 14 21:58:35.062393 systemd[1]: run-netns-cni\x2d304af03d\x2dbfcd\x2dc6b4\x2db25d\x2d3407d43c812c.mount: Deactivated successfully. 
Jul 14 21:58:35.065784 kubelet[2191]: E0714 21:58:35.064500 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:35.081710 kubelet[2191]: I0714 21:58:35.081469 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gzbm6" podStartSLOduration=39.081452641 podStartE2EDuration="39.081452641s" podCreationTimestamp="2025-07-14 21:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:58:35.079168931 +0000 UTC m=+46.268772953" watchObservedRunningTime="2025-07-14 21:58:35.081452641 +0000 UTC m=+46.271056623" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:34.976 [INFO][4298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:34.976 [INFO][4298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" iface="eth0" netns="/var/run/netns/cni-7bc1a13b-5ab1-3adb-eb44-67d9a33d85ce" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:34.976 [INFO][4298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" iface="eth0" netns="/var/run/netns/cni-7bc1a13b-5ab1-3adb-eb44-67d9a33d85ce" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:34.977 [INFO][4298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" iface="eth0" netns="/var/run/netns/cni-7bc1a13b-5ab1-3adb-eb44-67d9a33d85ce" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:34.977 [INFO][4298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:34.977 [INFO][4298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.012 [INFO][4331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.012 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.042 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.056 [WARNING][4331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.056 [INFO][4331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.057 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:35.082075 env[1319]: 2025-07-14 21:58:35.068 [INFO][4298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:35.088330 env[1319]: time="2025-07-14T21:58:35.087764054Z" level=info msg="TearDown network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" successfully" Jul 14 21:58:35.088330 env[1319]: time="2025-07-14T21:58:35.087825534Z" level=info msg="StopPodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" returns successfully" Jul 14 21:58:35.088450 kubelet[2191]: E0714 21:58:35.088237 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:35.088926 env[1319]: time="2025-07-14T21:58:35.088899729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbjx5,Uid:7023e1db-2106-48dc-85a1-3f1e832bd4ba,Namespace:kube-system,Attempt:1,}" Jul 14 21:58:35.107000 audit[4375]: NETFILTER_CFG table=filter:110 family=2 entries=18 op=nft_register_rule pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:35.107000 
audit[4375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff93af2e0 a2=0 a3=1 items=0 ppid=2295 pid=4375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.113790 kernel: audit: type=1325 audit(1752530315.107:426): table=filter:110 family=2 entries=18 op=nft_register_rule pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:35.113877 kernel: audit: type=1300 audit(1752530315.107:426): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff93af2e0 a2=0 a3=1 items=0 ppid=2295 pid=4375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.113901 kernel: audit: type=1327 audit(1752530315.107:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:35.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:35.115000 audit[4375]: NETFILTER_CFG table=nat:111 family=2 entries=16 op=nft_register_rule pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:35.115000 audit[4375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=fffff93af2e0 a2=0 a3=1 items=0 ppid=2295 pid=4375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:35.119606 kernel: audit: type=1325 
audit(1752530315.115:427): table=nat:111 family=2 entries=16 op=nft_register_rule pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:35.133000 audit[4393]: NETFILTER_CFG table=filter:112 family=2 entries=15 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:35.133000 audit[4393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc688c930 a2=0 a3=1 items=0 ppid=2295 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:35.140353 systemd-networkd[1104]: cali3dc9a25054f: Gained IPv6LL Jul 14 21:58:35.143000 audit[4393]: NETFILTER_CFG table=nat:113 family=2 entries=37 op=nft_register_chain pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:35.143000 audit[4393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffc688c930 a2=0 a3=1 items=0 ppid=2295 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:35.204310 systemd-networkd[1104]: cali97129d37a07: Link UP Jul 14 21:58:35.205600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali97129d37a07: link becomes ready Jul 14 21:58:35.205755 systemd-networkd[1104]: cali97129d37a07: Gained carrier Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.101 [INFO][4351] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0 calico-apiserver-79b975cf4d- calico-apiserver 6e01068b-a03a-4c0c-99a9-7e9275cb210b 998 0 2025-07-14 21:58:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79b975cf4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79b975cf4d-tvnn9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97129d37a07 [] [] }} ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.101 [INFO][4351] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.142 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" HandleID="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.142 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" HandleID="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40001a27c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79b975cf4d-tvnn9", "timestamp":"2025-07-14 21:58:35.142335136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.142 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.143 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.143 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.152 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.157 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.161 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.163 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.166 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.166 [INFO][4377] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" host="localhost" Jul 14 21:58:35.221923 env[1319]: 
2025-07-14 21:58:35.167 [INFO][4377] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2 Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.180 [INFO][4377] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.187 [INFO][4377] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.187 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" host="localhost" Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.187 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:35.221923 env[1319]: 2025-07-14 21:58:35.187 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" HandleID="k8s-pod-network.384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.222558 env[1319]: 2025-07-14 21:58:35.195 [INFO][4351] cni-plugin/k8s.go 418: Populated endpoint ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e01068b-a03a-4c0c-99a9-7e9275cb210b", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79b975cf4d-tvnn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97129d37a07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:35.222558 env[1319]: 2025-07-14 21:58:35.195 [INFO][4351] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.222558 env[1319]: 2025-07-14 21:58:35.195 [INFO][4351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97129d37a07 ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.222558 env[1319]: 2025-07-14 21:58:35.206 [INFO][4351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.222558 env[1319]: 2025-07-14 21:58:35.207 [INFO][4351] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"6e01068b-a03a-4c0c-99a9-7e9275cb210b", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2", Pod:"calico-apiserver-79b975cf4d-tvnn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97129d37a07", MAC:"7e:d1:02:86:03:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:35.222558 env[1319]: 2025-07-14 21:58:35.220 [INFO][4351] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2" Namespace="calico-apiserver" Pod="calico-apiserver-79b975cf4d-tvnn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:35.235000 audit[4429]: NETFILTER_CFG table=filter:114 family=2 entries=49 op=nft_register_chain pid=4429 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:35.235000 audit[4429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=fffff77a27b0 a2=0 a3=ffffab4defa8 items=0 ppid=3588 pid=4429 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.235000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:35.240643 env[1319]: time="2025-07-14T21:58:35.240552149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:35.240800 env[1319]: time="2025-07-14T21:58:35.240649189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:35.240800 env[1319]: time="2025-07-14T21:58:35.240670789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:35.240870 env[1319]: time="2025-07-14T21:58:35.240822868Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2 pid=4434 runtime=io.containerd.runc.v2 Jul 14 21:58:35.286128 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:35.337248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 21:58:35.337346 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6ba182e0ac7: link becomes ready Jul 14 21:58:35.329142 systemd-networkd[1104]: cali6ba182e0ac7: Link UP Jul 14 21:58:35.331357 systemd-networkd[1104]: cali6ba182e0ac7: Gained carrier Jul 14 21:58:35.332255 systemd-networkd[1104]: cali298ffd041b9: Gained IPv6LL Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.138 [INFO][4362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--8mddv-eth0 goldmane-58fd7646b9- calico-system 0fb397a8-167c-4a3c-b754-5643d7b757de 1000 0 2025-07-14 21:58:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-8mddv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6ba182e0ac7 [] [] }} ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.139 [INFO][4362] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.188 [INFO][4397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" HandleID="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.188 [INFO][4397] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" HandleID="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-8mddv", 
"timestamp":"2025-07-14 21:58:35.188140497 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.189 [INFO][4397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.189 [INFO][4397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.189 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.252 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.260 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.266 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.268 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.270 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.270 [INFO][4397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.271 [INFO][4397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2 Jul 
14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.275 [INFO][4397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.281 [INFO][4397] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.281 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" host="localhost" Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.281 [INFO][4397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:35.348098 env[1319]: 2025-07-14 21:58:35.281 [INFO][4397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" HandleID="k8s-pod-network.a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.348744 env[1319]: 2025-07-14 21:58:35.312 [INFO][4362] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--8mddv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0fb397a8-167c-4a3c-b754-5643d7b757de", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 
14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-8mddv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ba182e0ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:35.348744 env[1319]: 2025-07-14 21:58:35.312 [INFO][4362] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.348744 env[1319]: 2025-07-14 21:58:35.313 [INFO][4362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ba182e0ac7 ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.348744 env[1319]: 2025-07-14 21:58:35.333 [INFO][4362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.348744 env[1319]: 2025-07-14 21:58:35.333 [INFO][4362] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--8mddv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0fb397a8-167c-4a3c-b754-5643d7b757de", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2", Pod:"goldmane-58fd7646b9-8mddv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ba182e0ac7", MAC:"5e:fb:93:01:93:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:35.348744 env[1319]: 2025-07-14 21:58:35.345 [INFO][4362] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2" Namespace="calico-system" Pod="goldmane-58fd7646b9-8mddv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:35.358988 systemd[1]: run-netns-cni\x2d7bc1a13b\x2d5ab1\x2d3adb\x2deb44\x2d67d9a33d85ce.mount: Deactivated successfully. Jul 14 21:58:35.362428 env[1319]: time="2025-07-14T21:58:35.362389139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79b975cf4d-tvnn9,Uid:6e01068b-a03a-4c0c-99a9-7e9275cb210b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2\"" Jul 14 21:58:35.361000 audit[4478]: NETFILTER_CFG table=filter:115 family=2 entries=60 op=nft_register_chain pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:35.361000 audit[4478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29932 a0=3 a1=ffffdc0f30d0 a2=0 a3=ffffb5c0afa8 items=0 ppid=3588 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.361000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:35.379077 env[1319]: time="2025-07-14T21:58:35.379014587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:35.379077 env[1319]: time="2025-07-14T21:58:35.379053587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:35.379398 env[1319]: time="2025-07-14T21:58:35.379063826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:35.379676 env[1319]: time="2025-07-14T21:58:35.379643304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2 pid=4487 runtime=io.containerd.runc.v2 Jul 14 21:58:35.403613 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif639d6b81c0: link becomes ready Jul 14 21:58:35.404225 systemd-networkd[1104]: calif639d6b81c0: Link UP Jul 14 21:58:35.404672 systemd-networkd[1104]: calif639d6b81c0: Gained carrier Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.207 [INFO][4378] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0 coredns-7c65d6cfc9- kube-system 7023e1db-2106-48dc-85a1-3f1e832bd4ba 999 0 2025-07-14 21:57:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tbjx5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif639d6b81c0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.207 [INFO][4378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.241 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" HandleID="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.242 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" HandleID="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tbjx5", "timestamp":"2025-07-14 21:58:35.241866703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.242 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.281 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.281 [INFO][4411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.356 [INFO][4411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.361 [INFO][4411] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.372 [INFO][4411] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.374 [INFO][4411] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.376 [INFO][4411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.377 [INFO][4411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.378 [INFO][4411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.382 [INFO][4411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.389 [INFO][4411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" host="localhost" Jul 14 
21:58:35.419809 env[1319]: 2025-07-14 21:58:35.389 [INFO][4411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" host="localhost" Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.389 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:35.419809 env[1319]: 2025-07-14 21:58:35.389 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" HandleID="k8s-pod-network.ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.420400 env[1319]: 2025-07-14 21:58:35.396 [INFO][4378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7023e1db-2106-48dc-85a1-3f1e832bd4ba", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-tbjx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif639d6b81c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:35.420400 env[1319]: 2025-07-14 21:58:35.396 [INFO][4378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.420400 env[1319]: 2025-07-14 21:58:35.396 [INFO][4378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif639d6b81c0 ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.420400 env[1319]: 2025-07-14 21:58:35.404 [INFO][4378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.420400 env[1319]: 2025-07-14 21:58:35.406 [INFO][4378] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7023e1db-2106-48dc-85a1-3f1e832bd4ba", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c", Pod:"coredns-7c65d6cfc9-tbjx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif639d6b81c0", MAC:"46:c4:d3:ca:5f:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:35.420400 env[1319]: 2025-07-14 21:58:35.415 [INFO][4378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbjx5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:35.426100 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:35.444000 audit[4529]: NETFILTER_CFG table=filter:116 family=2 entries=58 op=nft_register_chain pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:35.444000 audit[4529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26760 a0=3 a1=ffffff62ed20 a2=0 a3=ffff9be91fa8 items=0 ppid=3588 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:35.444000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:35.447624 env[1319]: time="2025-07-14T21:58:35.447552208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8mddv,Uid:0fb397a8-167c-4a3c-b754-5643d7b757de,Namespace:calico-system,Attempt:1,} returns sandbox id \"a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2\"" Jul 14 21:58:35.451861 env[1319]: time="2025-07-14T21:58:35.450876314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:35.451861 env[1319]: time="2025-07-14T21:58:35.450950434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:35.451861 env[1319]: time="2025-07-14T21:58:35.450976954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:35.451861 env[1319]: time="2025-07-14T21:58:35.451281432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c pid=4539 runtime=io.containerd.runc.v2 Jul 14 21:58:35.494448 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:35.517853 env[1319]: time="2025-07-14T21:58:35.517807183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbjx5,Uid:7023e1db-2106-48dc-85a1-3f1e832bd4ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c\"" Jul 14 21:58:35.518640 kubelet[2191]: E0714 21:58:35.518615 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:35.521257 env[1319]: time="2025-07-14T21:58:35.521207328Z" level=info msg="CreateContainer within sandbox \"ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:58:35.532478 env[1319]: time="2025-07-14T21:58:35.532443239Z" level=info msg="CreateContainer within sandbox \"ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3ca136646d87e1421b7993ceeee1afae83256c1f831eca55291da2c78efc528\"" Jul 14 21:58:35.532996 env[1319]: time="2025-07-14T21:58:35.532971677Z" level=info msg="StartContainer for \"c3ca136646d87e1421b7993ceeee1afae83256c1f831eca55291da2c78efc528\"" 
Jul 14 21:58:35.605076 env[1319]: time="2025-07-14T21:58:35.605011363Z" level=info msg="StartContainer for \"c3ca136646d87e1421b7993ceeee1afae83256c1f831eca55291da2c78efc528\" returns successfully" Jul 14 21:58:35.714773 systemd-networkd[1104]: calic57ebfb3bf1: Gained IPv6LL Jul 14 21:58:35.901411 env[1319]: time="2025-07-14T21:58:35.901357074Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:35.902814 env[1319]: time="2025-07-14T21:58:35.902786788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:35.904229 env[1319]: time="2025-07-14T21:58:35.904197822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:35.905645 env[1319]: time="2025-07-14T21:58:35.905614216Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:35.906108 env[1319]: time="2025-07-14T21:58:35.906084654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 14 21:58:35.909408 env[1319]: time="2025-07-14T21:58:35.907751966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:58:35.916362 env[1319]: time="2025-07-14T21:58:35.915196374Z" level=info msg="CreateContainer within sandbox \"1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 21:58:35.925171 env[1319]: time="2025-07-14T21:58:35.924670453Z" level=info msg="CreateContainer within sandbox \"1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"17e2162caf8f4400320837e77acb7df78689f9c624f91cc504567f498f7337f8\"" Jul 14 21:58:35.925890 env[1319]: time="2025-07-14T21:58:35.925293490Z" level=info msg="StartContainer for \"17e2162caf8f4400320837e77acb7df78689f9c624f91cc504567f498f7337f8\"" Jul 14 21:58:35.997520 env[1319]: time="2025-07-14T21:58:35.997469056Z" level=info msg="StartContainer for \"17e2162caf8f4400320837e77acb7df78689f9c624f91cc504567f498f7337f8\" returns successfully" Jul 14 21:58:36.067946 kubelet[2191]: E0714 21:58:36.067811 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:36.073239 kubelet[2191]: E0714 21:58:36.073201 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:36.081475 kubelet[2191]: I0714 21:58:36.081411 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tbjx5" podStartSLOduration=40.081397076 podStartE2EDuration="40.081397076s" podCreationTimestamp="2025-07-14 21:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:58:36.079147564 +0000 UTC m=+47.268751586" watchObservedRunningTime="2025-07-14 21:58:36.081397076 +0000 UTC m=+47.271001098" Jul 14 21:58:36.109000 audit[4669]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 
21:58:36.109000 audit[4669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc5865ea0 a2=0 a3=1 items=0 ppid=2295 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:36.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:36.127000 audit[4669]: NETFILTER_CFG table=nat:118 family=2 entries=58 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:36.127000 audit[4669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffc5865ea0 a2=0 a3=1 items=0 ppid=2295 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:36.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:36.155104 kubelet[2191]: I0714 21:58:36.154928 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67865bb6d5-jb527" podStartSLOduration=20.524827085 podStartE2EDuration="23.154907295s" podCreationTimestamp="2025-07-14 21:58:13 +0000 UTC" firstStartedPulling="2025-07-14 21:58:33.277057959 +0000 UTC m=+44.466661941" lastFinishedPulling="2025-07-14 21:58:35.907138049 +0000 UTC m=+47.096742151" observedRunningTime="2025-07-14 21:58:36.105064952 +0000 UTC m=+47.294668974" watchObservedRunningTime="2025-07-14 21:58:36.154907295 +0000 UTC m=+47.344511317" Jul 14 21:58:36.418860 systemd-networkd[1104]: cali6ba182e0ac7: Gained IPv6LL Jul 14 21:58:36.995709 systemd-networkd[1104]: calif639d6b81c0: Gained IPv6LL Jul 14 21:58:37.077401 
kubelet[2191]: E0714 21:58:37.077039 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:37.077401 kubelet[2191]: E0714 21:58:37.077238 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:37.250729 systemd-networkd[1104]: cali97129d37a07: Gained IPv6LL Jul 14 21:58:37.904519 env[1319]: time="2025-07-14T21:58:37.903556493Z" level=info msg="StopPodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\"" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.956 [INFO][4696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.956 [INFO][4696] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" iface="eth0" netns="/var/run/netns/cni-74c79df6-c934-d709-fc91-e322899b2b0d" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.956 [INFO][4696] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" iface="eth0" netns="/var/run/netns/cni-74c79df6-c934-d709-fc91-e322899b2b0d" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.956 [INFO][4696] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" iface="eth0" netns="/var/run/netns/cni-74c79df6-c934-d709-fc91-e322899b2b0d" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.956 [INFO][4696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.956 [INFO][4696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.977 [INFO][4705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.977 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.977 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.985 [WARNING][4705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.985 [INFO][4705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.987 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:37.990687 env[1319]: 2025-07-14 21:58:37.989 [INFO][4696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:37.991880 env[1319]: time="2025-07-14T21:58:37.991840170Z" level=info msg="TearDown network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" successfully" Jul 14 21:58:37.991987 env[1319]: time="2025-07-14T21:58:37.991967769Z" level=info msg="StopPodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" returns successfully" Jul 14 21:58:37.994307 systemd[1]: run-netns-cni\x2d74c79df6\x2dc934\x2dd709\x2dfc91\x2de322899b2b0d.mount: Deactivated successfully. 
Jul 14 21:58:37.995462 env[1319]: time="2025-07-14T21:58:37.995415040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vscj,Uid:b453fdfd-5b94-4411-a498-a6ed452275d0,Namespace:calico-system,Attempt:1,}" Jul 14 21:58:38.081859 kubelet[2191]: E0714 21:58:38.078942 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:58:38.092810 env[1319]: time="2025-07-14T21:58:38.092740601Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.097152 env[1319]: time="2025-07-14T21:58:38.097123432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.100308 env[1319]: time="2025-07-14T21:58:38.100273946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.102078 env[1319]: time="2025-07-14T21:58:38.102043902Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.102942 env[1319]: time="2025-07-14T21:58:38.102908741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:58:38.105667 env[1319]: time="2025-07-14T21:58:38.105638935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:58:38.107050 env[1319]: 
time="2025-07-14T21:58:38.107014492Z" level=info msg="CreateContainer within sandbox \"49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:58:38.126911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 21:58:38.127247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali08fe35cb2b5: link becomes ready Jul 14 21:58:38.125138 systemd-networkd[1104]: cali08fe35cb2b5: Link UP Jul 14 21:58:38.127643 systemd-networkd[1104]: cali08fe35cb2b5: Gained carrier Jul 14 21:58:38.134436 env[1319]: time="2025-07-14T21:58:38.132922561Z" level=info msg="CreateContainer within sandbox \"49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70cd3bbfa3025b8871ff530810635d849d1535888d50c3e116f0883bea3e7700\"" Jul 14 21:58:38.137816 env[1319]: time="2025-07-14T21:58:38.137770871Z" level=info msg="StartContainer for \"70cd3bbfa3025b8871ff530810635d849d1535888d50c3e116f0883bea3e7700\"" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.048 [INFO][4713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6vscj-eth0 csi-node-driver- calico-system b453fdfd-5b94-4411-a498-a6ed452275d0 1054 0 2025-07-14 21:58:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6vscj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali08fe35cb2b5 [] [] }} ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.048 [INFO][4713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.075 [INFO][4729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" HandleID="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.075 [INFO][4729] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" HandleID="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6vscj", "timestamp":"2025-07-14 21:58:38.075073476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.075 [INFO][4729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.075 [INFO][4729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.075 [INFO][4729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.086 [INFO][4729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.094 [INFO][4729] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.098 [INFO][4729] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.100 [INFO][4729] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.102 [INFO][4729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.102 [INFO][4729] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.104 [INFO][4729] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7 Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.109 [INFO][4729] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.116 [INFO][4729] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" host="localhost" Jul 14 
21:58:38.143320 env[1319]: 2025-07-14 21:58:38.116 [INFO][4729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" host="localhost" Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.116 [INFO][4729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:38.143320 env[1319]: 2025-07-14 21:58:38.116 [INFO][4729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" HandleID="k8s-pod-network.72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.143949 env[1319]: 2025-07-14 21:58:38.118 [INFO][4713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vscj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b453fdfd-5b94-4411-a498-a6ed452275d0", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6vscj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08fe35cb2b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:38.143949 env[1319]: 2025-07-14 21:58:38.119 [INFO][4713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.143949 env[1319]: 2025-07-14 21:58:38.119 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08fe35cb2b5 ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.143949 env[1319]: 2025-07-14 21:58:38.128 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.143949 env[1319]: 2025-07-14 21:58:38.128 [INFO][4713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vscj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b453fdfd-5b94-4411-a498-a6ed452275d0", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7", Pod:"csi-node-driver-6vscj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08fe35cb2b5", MAC:"ce:f3:3d:93:d6:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:38.143949 env[1319]: 2025-07-14 21:58:38.141 [INFO][4713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7" Namespace="calico-system" Pod="csi-node-driver-6vscj" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:38.159031 env[1319]: time="2025-07-14T21:58:38.157035233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:58:38.159031 env[1319]: time="2025-07-14T21:58:38.157129792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:58:38.159031 env[1319]: time="2025-07-14T21:58:38.157156992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:58:38.164000 audit[4784]: NETFILTER_CFG table=filter:119 family=2 entries=56 op=nft_register_chain pid=4784 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 21:58:38.164000 audit[4784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffe49b27e0 a2=0 a3=ffff8b146fa8 items=0 ppid=3588 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:38.164000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 21:58:38.167187 env[1319]: time="2025-07-14T21:58:38.164048259Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7 pid=4773 runtime=io.containerd.runc.v2 Jul 14 21:58:38.195663 systemd-resolved[1240]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:58:38.210734 env[1319]: time="2025-07-14T21:58:38.210693566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vscj,Uid:b453fdfd-5b94-4411-a498-a6ed452275d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7\"" Jul 14 21:58:38.219632 env[1319]: 
time="2025-07-14T21:58:38.217852151Z" level=info msg="StartContainer for \"70cd3bbfa3025b8871ff530810635d849d1535888d50c3e116f0883bea3e7700\" returns successfully" Jul 14 21:58:38.353430 env[1319]: time="2025-07-14T21:58:38.353376321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.354711 env[1319]: time="2025-07-14T21:58:38.354668798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.356090 env[1319]: time="2025-07-14T21:58:38.356053676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.357461 env[1319]: time="2025-07-14T21:58:38.357432393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:38.358013 env[1319]: time="2025-07-14T21:58:38.357986992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:58:38.359011 env[1319]: time="2025-07-14T21:58:38.358970830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 21:58:38.360232 env[1319]: time="2025-07-14T21:58:38.360204187Z" level=info msg="CreateContainer within sandbox \"384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:58:38.376664 env[1319]: time="2025-07-14T21:58:38.376608235Z" level=info msg="CreateContainer 
within sandbox \"384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"298994a5d8f22d56c176096e028436cd9d5dd42bff3e73b168734c54aa00f8d3\"" Jul 14 21:58:38.377402 env[1319]: time="2025-07-14T21:58:38.377365633Z" level=info msg="StartContainer for \"298994a5d8f22d56c176096e028436cd9d5dd42bff3e73b168734c54aa00f8d3\"" Jul 14 21:58:38.437023 env[1319]: time="2025-07-14T21:58:38.436924154Z" level=info msg="StartContainer for \"298994a5d8f22d56c176096e028436cd9d5dd42bff3e73b168734c54aa00f8d3\" returns successfully" Jul 14 21:58:39.109100 kubelet[2191]: I0714 21:58:39.109035 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79b975cf4d-tvnn9" podStartSLOduration=27.116375146 podStartE2EDuration="30.109018573s" podCreationTimestamp="2025-07-14 21:58:09 +0000 UTC" firstStartedPulling="2025-07-14 21:58:35.366157883 +0000 UTC m=+46.555761905" lastFinishedPulling="2025-07-14 21:58:38.35880131 +0000 UTC m=+49.548405332" observedRunningTime="2025-07-14 21:58:39.097255388 +0000 UTC m=+50.286859410" watchObservedRunningTime="2025-07-14 21:58:39.109018573 +0000 UTC m=+50.298622595" Jul 14 21:58:39.109498 kubelet[2191]: I0714 21:58:39.109136 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79b975cf4d-9xgmn" podStartSLOduration=26.324958477 podStartE2EDuration="30.109132573s" podCreationTimestamp="2025-07-14 21:58:09 +0000 UTC" firstStartedPulling="2025-07-14 21:58:34.320513601 +0000 UTC m=+45.510117623" lastFinishedPulling="2025-07-14 21:58:38.104687697 +0000 UTC m=+49.294291719" observedRunningTime="2025-07-14 21:58:39.108543253 +0000 UTC m=+50.298147355" watchObservedRunningTime="2025-07-14 21:58:39.109132573 +0000 UTC m=+50.298736595" Jul 14 21:58:39.124000 audit[4882]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=4882 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:39.124000 audit[4882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe81e63f0 a2=0 a3=1 items=0 ppid=2295 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:39.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:39.131000 audit[4882]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:39.131000 audit[4882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe81e63f0 a2=0 a3=1 items=0 ppid=2295 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:39.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:39.152000 audit[4884]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:39.155595 kernel: kauditd_printk_skb: 32 callbacks suppressed Jul 14 21:58:39.155637 kernel: audit: type=1325 audit(1752530319.152:438): table=filter:122 family=2 entries=12 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:39.152000 audit[4884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffeb7f6c60 a2=0 a3=1 items=0 ppid=2295 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:39.160085 kernel: audit: type=1300 audit(1752530319.152:438): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffeb7f6c60 a2=0 a3=1 items=0 ppid=2295 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:39.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:39.163594 kernel: audit: type=1327 audit(1752530319.152:438): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:39.165000 audit[4884]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:39.165000 audit[4884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffeb7f6c60 a2=0 a3=1 items=0 ppid=2295 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:39.172682 kernel: audit: type=1325 audit(1752530319.165:439): table=nat:123 family=2 entries=22 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:39.172734 kernel: audit: type=1300 audit(1752530319.165:439): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffeb7f6c60 a2=0 a3=1 items=0 ppid=2295 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:39.165000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:39.174362 kernel: audit: type=1327 audit(1752530319.165:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:39.298702 systemd-networkd[1104]: cali08fe35cb2b5: Gained IPv6LL Jul 14 21:58:40.089560 kubelet[2191]: I0714 21:58:40.089525 2191 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:58:40.185000 audit[4887]: NETFILTER_CFG table=filter:124 family=2 entries=11 op=nft_register_rule pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:40.185000 audit[4887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffff8db4d0 a2=0 a3=1 items=0 ppid=2295 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:40.191145 kernel: audit: type=1325 audit(1752530320.185:440): table=filter:124 family=2 entries=11 op=nft_register_rule pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:40.191230 kernel: audit: type=1300 audit(1752530320.185:440): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffff8db4d0 a2=0 a3=1 items=0 ppid=2295 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:40.191272 kernel: audit: type=1327 audit(1752530320.185:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:40.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:40.192000 
audit[4887]: NETFILTER_CFG table=nat:125 family=2 entries=29 op=nft_register_chain pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:40.192000 audit[4887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffff8db4d0 a2=0 a3=1 items=0 ppid=2295 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:40.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:40.196601 kernel: audit: type=1325 audit(1752530320.192:441): table=nat:125 family=2 entries=29 op=nft_register_chain pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:40.372242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201006640.mount: Deactivated successfully. Jul 14 21:58:40.970749 env[1319]: time="2025-07-14T21:58:40.970700082Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:40.972466 env[1319]: time="2025-07-14T21:58:40.972425001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:40.974190 env[1319]: time="2025-07-14T21:58:40.974163200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:40.975448 env[1319]: time="2025-07-14T21:58:40.975421800Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:40.976120 env[1319]: time="2025-07-14T21:58:40.976090599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 14 21:58:40.977994 env[1319]: time="2025-07-14T21:58:40.977947078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 21:58:40.978538 env[1319]: time="2025-07-14T21:58:40.978510398Z" level=info msg="CreateContainer within sandbox \"a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 21:58:40.992839 env[1319]: time="2025-07-14T21:58:40.992794230Z" level=info msg="CreateContainer within sandbox \"a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555\"" Jul 14 21:58:40.993517 env[1319]: time="2025-07-14T21:58:40.993459310Z" level=info msg="StartContainer for \"a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555\"" Jul 14 21:58:41.070571 env[1319]: time="2025-07-14T21:58:41.070516556Z" level=info msg="StartContainer for \"a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555\" returns successfully" Jul 14 21:58:41.110192 kubelet[2191]: I0714 21:58:41.109990 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-8mddv" podStartSLOduration=22.581699285 podStartE2EDuration="28.109961881s" podCreationTimestamp="2025-07-14 21:58:13 +0000 UTC" firstStartedPulling="2025-07-14 21:58:35.448877043 +0000 UTC m=+46.638481025" lastFinishedPulling="2025-07-14 21:58:40.977139599 +0000 UTC m=+52.166743621" 
observedRunningTime="2025-07-14 21:58:41.109850081 +0000 UTC m=+52.299454103" watchObservedRunningTime="2025-07-14 21:58:41.109961881 +0000 UTC m=+52.299565903" Jul 14 21:58:41.123000 audit[4932]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:41.123000 audit[4932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffff55bc830 a2=0 a3=1 items=0 ppid=2295 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:41.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:41.128000 audit[4932]: NETFILTER_CFG table=nat:127 family=2 entries=24 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:41.128000 audit[4932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=fffff55bc830 a2=0 a3=1 items=0 ppid=2295 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:41.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:42.115597 systemd[1]: run-containerd-runc-k8s.io-a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555-runc.gnRgTH.mount: Deactivated successfully. 
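The kubelet's "Observed pod startup duration" entries above report `podStartE2EDuration` as the gap between `podCreationTimestamp` and `watchObservedRunningTime`. A quick sanity check using the goldmane pod's timestamps from the entry above (fractional seconds truncated to microseconds, which `datetime` supports):

```python
from datetime import datetime, timezone

# Timestamps from the pod_startup_latency_tracker log entry above:
# podCreationTimestamp="2025-07-14 21:58:13 +0000 UTC"
# watchObservedRunningTime="2025-07-14 21:58:41.109961881 +0000 UTC"
created = datetime(2025, 7, 14, 21, 58, 13, tzinfo=timezone.utc)
observed = datetime(2025, 7, 14, 21, 58, 41, 109961, tzinfo=timezone.utc)

e2e = (observed - created).total_seconds()
print(f"{e2e:.6f}s")  # agrees with podStartE2EDuration="28.109961881s" to the µs
```

The smaller `podStartSLOduration` (22.58s here) excludes part of the image-pull window, which is why it differs from the E2E figure by roughly the `firstStartedPulling`-to-`lastFinishedPulling` gap.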
Jul 14 21:58:42.201099 env[1319]: time="2025-07-14T21:58:42.201047655Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:42.203822 env[1319]: time="2025-07-14T21:58:42.203772378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:42.205542 env[1319]: time="2025-07-14T21:58:42.205511699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:42.206781 env[1319]: time="2025-07-14T21:58:42.206748780Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:42.207999 env[1319]: time="2025-07-14T21:58:42.207963021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 14 21:58:42.210513 env[1319]: time="2025-07-14T21:58:42.210181383Z" level=info msg="CreateContainer within sandbox \"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 21:58:42.224160 env[1319]: time="2025-07-14T21:58:42.224113594Z" level=info msg="CreateContainer within sandbox \"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"87fc5194dc0eb7582b21e67b86b6f6e79ec868a2611d00025421e7506440d376\"" Jul 14 21:58:42.226064 env[1319]: time="2025-07-14T21:58:42.224795875Z" level=info msg="StartContainer for 
\"87fc5194dc0eb7582b21e67b86b6f6e79ec868a2611d00025421e7506440d376\"" Jul 14 21:58:42.282418 env[1319]: time="2025-07-14T21:58:42.282362802Z" level=info msg="StartContainer for \"87fc5194dc0eb7582b21e67b86b6f6e79ec868a2611d00025421e7506440d376\" returns successfully" Jul 14 21:58:42.284275 env[1319]: time="2025-07-14T21:58:42.284053323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 21:58:43.119438 systemd[1]: run-containerd-runc-k8s.io-a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555-runc.BqYLNS.mount: Deactivated successfully. Jul 14 21:58:43.539879 env[1319]: time="2025-07-14T21:58:43.539761058Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:43.541524 env[1319]: time="2025-07-14T21:58:43.541482461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:43.545002 env[1319]: time="2025-07-14T21:58:43.544967626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:43.546776 env[1319]: time="2025-07-14T21:58:43.546738988Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 21:58:43.547728 env[1319]: time="2025-07-14T21:58:43.547688950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 14 21:58:43.550899 
env[1319]: time="2025-07-14T21:58:43.550862594Z" level=info msg="CreateContainer within sandbox \"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 21:58:43.565358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676098156.mount: Deactivated successfully. Jul 14 21:58:43.568373 env[1319]: time="2025-07-14T21:58:43.568330780Z" level=info msg="CreateContainer within sandbox \"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7ad52593823118976ee2e1bbd02ba6f277859ef897572ecc47281ed687d7262e\"" Jul 14 21:58:43.570935 env[1319]: time="2025-07-14T21:58:43.570895744Z" level=info msg="StartContainer for \"7ad52593823118976ee2e1bbd02ba6f277859ef897572ecc47281ed687d7262e\"" Jul 14 21:58:43.678093 env[1319]: time="2025-07-14T21:58:43.678033861Z" level=info msg="StartContainer for \"7ad52593823118976ee2e1bbd02ba6f277859ef897572ecc47281ed687d7262e\" returns successfully" Jul 14 21:58:43.995146 kubelet[2191]: I0714 21:58:43.995103 2191 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 21:58:43.996363 kubelet[2191]: I0714 21:58:43.996315 2191 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 21:58:44.114990 kubelet[2191]: I0714 21:58:44.114923 2191 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6vscj" podStartSLOduration=25.779106063 podStartE2EDuration="31.114905332s" podCreationTimestamp="2025-07-14 21:58:13 +0000 UTC" firstStartedPulling="2025-07-14 21:58:38.212653562 +0000 UTC m=+49.402257584" lastFinishedPulling="2025-07-14 21:58:43.548452831 +0000 UTC m=+54.738056853" 
observedRunningTime="2025-07-14 21:58:44.114543412 +0000 UTC m=+55.304147554" watchObservedRunningTime="2025-07-14 21:58:44.114905332 +0000 UTC m=+55.304509354" Jul 14 21:58:44.561846 systemd[1]: run-containerd-runc-k8s.io-92ad90cc85a743a68a530f8402f9f31f0d3df3e158813ecbd6f2fbacf0a6a0c9-runc.V9VTzM.mount: Deactivated successfully. Jul 14 21:58:45.465689 systemd[1]: run-containerd-runc-k8s.io-a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555-runc.0mj53o.mount: Deactivated successfully. Jul 14 21:58:48.905168 env[1319]: time="2025-07-14T21:58:48.905129722Z" level=info msg="StopPodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\"" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:48.977 [WARNING][5121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vscj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b453fdfd-5b94-4411-a498-a6ed452275d0", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7", Pod:"csi-node-driver-6vscj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08fe35cb2b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:48.978 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:48.978 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" iface="eth0" netns="" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:48.978 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:48.978 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.011 [INFO][5134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.011 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.011 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.024 [WARNING][5134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.025 [INFO][5134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.026 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:49.030278 env[1319]: 2025-07-14 21:58:49.028 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.030748 env[1319]: time="2025-07-14T21:58:49.030309570Z" level=info msg="TearDown network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" successfully" Jul 14 21:58:49.030748 env[1319]: time="2025-07-14T21:58:49.030341530Z" level=info msg="StopPodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" returns successfully" Jul 14 21:58:49.030937 env[1319]: time="2025-07-14T21:58:49.030897973Z" level=info msg="RemovePodSandbox for \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\"" Jul 14 21:58:49.030980 env[1319]: time="2025-07-14T21:58:49.030951333Z" level=info msg="Forcibly stopping sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\"" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.076 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete 
WEP. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vscj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b453fdfd-5b94-4411-a498-a6ed452275d0", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72835c61f2827b3120d98396703e2d65ac8698899d63d95b5dfb58f8b1b226c7", Pod:"csi-node-driver-6vscj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08fe35cb2b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.076 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.076 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" iface="eth0" netns="" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.076 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.076 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.110 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.111 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.111 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.121 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.121 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" HandleID="k8s-pod-network.c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Workload="localhost-k8s-csi--node--driver--6vscj-eth0" Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.123 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:49.129645 env[1319]: 2025-07-14 21:58:49.127 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c" Jul 14 21:58:49.129645 env[1319]: time="2025-07-14T21:58:49.129565822Z" level=info msg="TearDown network for sandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" successfully" Jul 14 21:58:49.134605 env[1319]: time="2025-07-14T21:58:49.134465686Z" level=info msg="RemovePodSandbox \"c7ae5f85233885dadd7e449d0eaacd0fd310d65ba3a813555836df26f7d86c9c\" returns successfully" Jul 14 21:58:49.137309 env[1319]: time="2025-07-14T21:58:49.135258530Z" level=info msg="StopPodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\"" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.177 [WARNING][5183] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e01068b-a03a-4c0c-99a9-7e9275cb210b", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2", Pod:"calico-apiserver-79b975cf4d-tvnn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97129d37a07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.178 [INFO][5183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.178 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" iface="eth0" netns="" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.178 [INFO][5183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.178 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.249 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.250 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.250 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.258 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.258 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.260 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:49.264493 env[1319]: 2025-07-14 21:58:49.262 [INFO][5183] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.264493 env[1319]: time="2025-07-14T21:58:49.264466690Z" level=info msg="TearDown network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" successfully" Jul 14 21:58:49.264943 env[1319]: time="2025-07-14T21:58:49.264506090Z" level=info msg="StopPodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" returns successfully" Jul 14 21:58:49.266322 env[1319]: time="2025-07-14T21:58:49.265314254Z" level=info msg="RemovePodSandbox for \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\"" Jul 14 21:58:49.266322 env[1319]: time="2025-07-14T21:58:49.265364455Z" level=info msg="Forcibly stopping sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\"" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.298 [WARNING][5209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e01068b-a03a-4c0c-99a9-7e9275cb210b", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"384543a2527f4de6b616dbc7029df9f2dd058d3bb6079a2411c46daabb71d3e2", Pod:"calico-apiserver-79b975cf4d-tvnn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97129d37a07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.298 [INFO][5209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.299 [INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" iface="eth0" netns="" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.299 [INFO][5209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.299 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.317 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.317 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.317 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.326 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.326 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" HandleID="k8s-pod-network.bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Workload="localhost-k8s-calico--apiserver--79b975cf4d--tvnn9-eth0" Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.327 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:49.332158 env[1319]: 2025-07-14 21:58:49.329 [INFO][5209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380" Jul 14 21:58:49.332621 env[1319]: time="2025-07-14T21:58:49.332186626Z" level=info msg="TearDown network for sandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" successfully" Jul 14 21:58:49.335065 env[1319]: time="2025-07-14T21:58:49.335022480Z" level=info msg="RemovePodSandbox \"bd7d0f5b1132fbecfc8953bcc3f0da7fbcecc0b4914b1f02ce6c568c2c2dc380\" returns successfully" Jul 14 21:58:49.335578 env[1319]: time="2025-07-14T21:58:49.335549922Z" level=info msg="StopPodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\"" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.365 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1c592a9-3faf-4978-af8d-8d83292a3475", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982", Pod:"calico-apiserver-79b975cf4d-9xgmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic57ebfb3bf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.366 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.366 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" iface="eth0" netns="" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.366 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.366 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.389 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.389 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.389 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.398 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.398 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.399 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:49.403041 env[1319]: 2025-07-14 21:58:49.401 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.403683 env[1319]: time="2025-07-14T21:58:49.403555019Z" level=info msg="TearDown network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" successfully" Jul 14 21:58:49.403749 env[1319]: time="2025-07-14T21:58:49.403732020Z" level=info msg="StopPodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" returns successfully" Jul 14 21:58:49.404333 env[1319]: time="2025-07-14T21:58:49.404297703Z" level=info msg="RemovePodSandbox for \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\"" Jul 14 21:58:49.404392 env[1319]: time="2025-07-14T21:58:49.404342063Z" level=info msg="Forcibly stopping sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\"" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.436 [WARNING][5264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0", GenerateName:"calico-apiserver-79b975cf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1c592a9-3faf-4978-af8d-8d83292a3475", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79b975cf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49376ffc01bf2a9cf3c5f712dbebd9d7af57b8e1630f8d8d5eaa3f971b47d982", Pod:"calico-apiserver-79b975cf4d-9xgmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic57ebfb3bf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.436 [INFO][5264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.436 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" iface="eth0" netns="" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.436 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.436 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.456 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.456 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.456 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.467 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.467 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" HandleID="k8s-pod-network.4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Workload="localhost-k8s-calico--apiserver--79b975cf4d--9xgmn-eth0" Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.469 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:49.474075 env[1319]: 2025-07-14 21:58:49.471 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677" Jul 14 21:58:49.474516 env[1319]: time="2025-07-14T21:58:49.474110889Z" level=info msg="TearDown network for sandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" successfully" Jul 14 21:58:49.477431 env[1319]: time="2025-07-14T21:58:49.477400785Z" level=info msg="RemovePodSandbox \"4288cbeea77c3d95311bfc314a4ab66cb09fcb453d00baaffdc5e36c4b640677\" returns successfully" Jul 14 21:58:49.477948 env[1319]: time="2025-07-14T21:58:49.477909428Z" level=info msg="StopPodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\"" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.511 [WARNING][5291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--8mddv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0fb397a8-167c-4a3c-b754-5643d7b757de", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2", Pod:"goldmane-58fd7646b9-8mddv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ba182e0ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.511 [INFO][5291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.511 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" iface="eth0" netns="" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.511 [INFO][5291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.511 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.532 [INFO][5300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.532 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.532 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.541 [WARNING][5300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.541 [INFO][5300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.543 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:49.547542 env[1319]: 2025-07-14 21:58:49.544 [INFO][5291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.548099 env[1319]: time="2025-07-14T21:58:49.548061055Z" level=info msg="TearDown network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" successfully" Jul 14 21:58:49.548171 env[1319]: time="2025-07-14T21:58:49.548156016Z" level=info msg="StopPodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" returns successfully" Jul 14 21:58:49.548687 env[1319]: time="2025-07-14T21:58:49.548658058Z" level=info msg="RemovePodSandbox for \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\"" Jul 14 21:58:49.548753 env[1319]: time="2025-07-14T21:58:49.548694698Z" level=info msg="Forcibly stopping sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\"" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.582 [WARNING][5317] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--8mddv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0fb397a8-167c-4a3c-b754-5643d7b757de", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7b6cca71e45f1ffcadfdd9eac8184364a2d4719645c07e4233ee2cd66dbbcd2", Pod:"goldmane-58fd7646b9-8mddv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ba182e0ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.583 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.583 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" iface="eth0" netns="" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.583 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.583 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.605 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.605 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.605 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.616 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.616 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" HandleID="k8s-pod-network.d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Workload="localhost-k8s-goldmane--58fd7646b9--8mddv-eth0" Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.617 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:49.621423 env[1319]: 2025-07-14 21:58:49.619 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe" Jul 14 21:58:49.621878 env[1319]: time="2025-07-14T21:58:49.621453939Z" level=info msg="TearDown network for sandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" successfully" Jul 14 21:58:49.704688 env[1319]: time="2025-07-14T21:58:49.704633391Z" level=info msg="RemovePodSandbox \"d8d73999a7e91ad187f4060eaa3645a7c69f7be52c1a31be03c9b35af74c00fe\" returns successfully" Jul 14 21:58:49.707109 env[1319]: time="2025-07-14T21:58:49.707076363Z" level=info msg="StopPodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\"" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.741 [WARNING][5343] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" WorkloadEndpoint="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.742 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.742 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" iface="eth0" netns="" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.742 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.742 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.767 [INFO][5352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.767 [INFO][5352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.767 [INFO][5352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.776 [WARNING][5352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.776 [INFO][5352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.778 [INFO][5352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:49.784935 env[1319]: 2025-07-14 21:58:49.780 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.785417 env[1319]: time="2025-07-14T21:58:49.785384591Z" level=info msg="TearDown network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" successfully" Jul 14 21:58:49.785487 env[1319]: time="2025-07-14T21:58:49.785462431Z" level=info msg="StopPodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" returns successfully" Jul 14 21:58:49.788015 env[1319]: time="2025-07-14T21:58:49.787987164Z" level=info msg="RemovePodSandbox for \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\"" Jul 14 21:58:49.788221 env[1319]: time="2025-07-14T21:58:49.788166005Z" level=info msg="Forcibly stopping sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\"" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.834 [WARNING][5369] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" WorkloadEndpoint="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.834 [INFO][5369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.834 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" iface="eth0" netns="" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.834 [INFO][5369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.834 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.860 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.860 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.861 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.876 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.877 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" HandleID="k8s-pod-network.4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Workload="localhost-k8s-whisker--6fbfc6dcd--bcd56-eth0" Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.880 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:58:49.884756 env[1319]: 2025-07-14 21:58:49.882 [INFO][5369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005" Jul 14 21:58:49.885237 env[1319]: time="2025-07-14T21:58:49.885191725Z" level=info msg="TearDown network for sandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" successfully" Jul 14 21:58:49.911276 env[1319]: time="2025-07-14T21:58:49.911231014Z" level=info msg="RemovePodSandbox \"4dbc944153be85b5c25a13f30948a18ec54c8f6685952ebed691d7ddbe423005\" returns successfully" Jul 14 21:58:49.912170 env[1319]: time="2025-07-14T21:58:49.912138139Z" level=info msg="StopPodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\"" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.953 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0", GenerateName:"calico-kube-controllers-67865bb6d5-", Namespace:"calico-system", SelfLink:"", UID:"c1a1b271-e606-49ed-b47b-b98b88fdbed2", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67865bb6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4", Pod:"calico-kube-controllers-67865bb6d5-jb527", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dc9a25054f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.953 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.954 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" iface="eth0" netns="" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.954 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.954 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.976 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.976 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:49.976 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:50.017 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:50.017 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:50.018 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:50.022630 env[1319]: 2025-07-14 21:58:50.020 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.023074 env[1319]: time="2025-07-14T21:58:50.022667737Z" level=info msg="TearDown network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" successfully" Jul 14 21:58:50.023074 env[1319]: time="2025-07-14T21:58:50.022700858Z" level=info msg="StopPodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" returns successfully" Jul 14 21:58:50.023326 env[1319]: time="2025-07-14T21:58:50.023298101Z" level=info msg="RemovePodSandbox for \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\"" Jul 14 21:58:50.023438 env[1319]: time="2025-07-14T21:58:50.023400021Z" level=info msg="Forcibly stopping sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\"" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.061 [WARNING][5425] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0", GenerateName:"calico-kube-controllers-67865bb6d5-", Namespace:"calico-system", SelfLink:"", UID:"c1a1b271-e606-49ed-b47b-b98b88fdbed2", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67865bb6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1775a4753e562f4f662b267787e2d4d205286a28cbab4197bc4cd34e931125f4", Pod:"calico-kube-controllers-67865bb6d5-jb527", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dc9a25054f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.061 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.061 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" iface="eth0" netns="" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.061 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.061 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.080 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.081 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.081 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.089 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.089 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" HandleID="k8s-pod-network.34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Workload="localhost-k8s-calico--kube--controllers--67865bb6d5--jb527-eth0" Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.091 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:50.094537 env[1319]: 2025-07-14 21:58:50.092 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879" Jul 14 21:58:50.095053 env[1319]: time="2025-07-14T21:58:50.095017413Z" level=info msg="TearDown network for sandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" successfully" Jul 14 21:58:50.098083 env[1319]: time="2025-07-14T21:58:50.098043350Z" level=info msg="RemovePodSandbox \"34e6cfb6dcd02fabef4e10fa5af5fdffe31d90bfd85db8a13da672a1edd21879\" returns successfully" Jul 14 21:58:50.098705 env[1319]: time="2025-07-14T21:58:50.098673473Z" level=info msg="StopPodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\"" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.142 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7023e1db-2106-48dc-85a1-3f1e832bd4ba", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c", Pod:"coredns-7c65d6cfc9-tbjx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif639d6b81c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.143 [INFO][5451] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.143 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" iface="eth0" netns="" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.143 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.143 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.164 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.164 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.164 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.173 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.173 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.174 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:50.182243 env[1319]: 2025-07-14 21:58:50.176 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.182243 env[1319]: time="2025-07-14T21:58:50.180731203Z" level=info msg="TearDown network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" successfully" Jul 14 21:58:50.182243 env[1319]: time="2025-07-14T21:58:50.180764123Z" level=info msg="StopPodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" returns successfully" Jul 14 21:58:50.182827 env[1319]: time="2025-07-14T21:58:50.182788054Z" level=info msg="RemovePodSandbox for \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\"" Jul 14 21:58:50.182873 env[1319]: time="2025-07-14T21:58:50.182837334Z" level=info msg="Forcibly stopping sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\"" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.219 [WARNING][5476] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7023e1db-2106-48dc-85a1-3f1e832bd4ba", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad50fbc4e88a3840c2165c645e1ad60f4d200eccda8cf071e7d36cab0b097f0c", Pod:"coredns-7c65d6cfc9-tbjx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif639d6b81c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.219 [INFO][5476] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.220 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" iface="eth0" netns="" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.220 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.220 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.242 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.242 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.242 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.261 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.261 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" HandleID="k8s-pod-network.419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Workload="localhost-k8s-coredns--7c65d6cfc9--tbjx5-eth0" Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.263 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:50.266470 env[1319]: 2025-07-14 21:58:50.264 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5" Jul 14 21:58:50.267092 env[1319]: time="2025-07-14T21:58:50.266515072Z" level=info msg="TearDown network for sandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" successfully" Jul 14 21:58:50.269313 env[1319]: time="2025-07-14T21:58:50.269274727Z" level=info msg="RemovePodSandbox \"419f7f38ff00df4627cb0daf5f1df44c60b86128349fd752edbeec1e9db942b5\" returns successfully" Jul 14 21:58:50.269809 env[1319]: time="2025-07-14T21:58:50.269776370Z" level=info msg="StopPodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\"" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.314 [WARNING][5502] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a8fc1316-6b04-4d95-89ba-2535a5175aa9", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4", Pod:"coredns-7c65d6cfc9-gzbm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298ffd041b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.315 [INFO][5502] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.315 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" iface="eth0" netns="" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.315 [INFO][5502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.315 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.335 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.335 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.335 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.365 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.366 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.368 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:50.394504 env[1319]: 2025-07-14 21:58:50.392 [INFO][5502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.395041 env[1319]: time="2025-07-14T21:58:50.394532293Z" level=info msg="TearDown network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" successfully" Jul 14 21:58:50.395041 env[1319]: time="2025-07-14T21:58:50.394564253Z" level=info msg="StopPodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" returns successfully" Jul 14 21:58:50.395093 env[1319]: time="2025-07-14T21:58:50.395043856Z" level=info msg="RemovePodSandbox for \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\"" Jul 14 21:58:50.395119 env[1319]: time="2025-07-14T21:58:50.395074856Z" level=info msg="Forcibly stopping sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\"" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.440 [WARNING][5528] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a8fc1316-6b04-4d95-89ba-2535a5175aa9", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d6d34f691c4f8861cbaaa90500d0cbc2d861a0a2a9f4993c53f2f82537e1e4", Pod:"coredns-7c65d6cfc9-gzbm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali298ffd041b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.440 [INFO][5528] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.440 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" iface="eth0" netns="" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.440 [INFO][5528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.440 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.471 [INFO][5536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.471 [INFO][5536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.471 [INFO][5536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.481 [WARNING][5536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.481 [INFO][5536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" HandleID="k8s-pod-network.4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Workload="localhost-k8s-coredns--7c65d6cfc9--gzbm6-eth0" Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.482 [INFO][5536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:58:50.486134 env[1319]: 2025-07-14 21:58:50.484 [INFO][5528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705" Jul 14 21:58:50.486134 env[1319]: time="2025-07-14T21:58:50.486080874Z" level=info msg="TearDown network for sandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" successfully" Jul 14 21:58:50.515627 env[1319]: time="2025-07-14T21:58:50.515552595Z" level=info msg="RemovePodSandbox \"4cc597f4c9089103a9c183b9b7a7c9f4ebf52c9b77c1003007a203858b963705\" returns successfully" Jul 14 21:58:51.191475 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 14 21:58:51.191633 kernel: audit: type=1325 audit(1752530331.185:444): table=filter:128 family=2 entries=9 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:51.191663 kernel: audit: type=1300 audit(1752530331.185:444): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcf8631c0 a2=0 a3=1 items=0 ppid=2295 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:51.185000 
audit[5588]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:51.185000 audit[5588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcf8631c0 a2=0 a3=1 items=0 ppid=2295 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:51.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:51.194117 kernel: audit: type=1327 audit(1752530331.185:444): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:51.195000 audit[5588]: NETFILTER_CFG table=nat:129 family=2 entries=31 op=nft_register_chain pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:51.195000 audit[5588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffcf8631c0 a2=0 a3=1 items=0 ppid=2295 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:51.201813 kernel: audit: type=1325 audit(1752530331.195:445): table=nat:129 family=2 entries=31 op=nft_register_chain pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:58:51.201900 kernel: audit: type=1300 audit(1752530331.195:445): arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffcf8631c0 a2=0 a3=1 items=0 ppid=2295 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:58:51.195000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:58:51.209078 kernel: audit: type=1327 audit(1752530331.195:445): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:59:00.903190 kubelet[2191]: E0714 21:59:00.903141 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:01.767371 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:41506.service. Jul 14 21:59:01.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.75:22-10.0.0.1:41506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:01.770612 kernel: audit: type=1130 audit(1752530341.766:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.75:22-10.0.0.1:41506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:01.833546 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 41506 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:01.832000 audit[5616]: USER_ACCT pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.835000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.838915 kernel: audit: type=1101 audit(1752530341.832:447): pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.838985 kernel: audit: type=1103 audit(1752530341.835:448): pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.839010 kernel: audit: type=1006 audit(1752530341.835:449): pid=5616 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 14 21:59:01.839285 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:01.840361 kernel: audit: type=1300 audit(1752530341.835:449): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3ce4b50 a2=3 a3=1 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 14 21:59:01.835000 audit[5616]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3ce4b50 a2=3 a3=1 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:01.835000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:01.843748 kernel: audit: type=1327 audit(1752530341.835:449): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:01.848514 systemd-logind[1305]: New session 8 of user core. Jul 14 21:59:01.849390 systemd[1]: Started session-8.scope. Jul 14 21:59:01.852000 audit[5616]: USER_START pid=5616 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.854000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.858363 kernel: audit: type=1105 audit(1752530341.852:450): pid=5616 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:01.858416 kernel: audit: type=1103 audit(1752530341.854:451): pid=5619 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:02.224679 sshd[5616]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:02.226000 
audit[5616]: USER_END pid=5616 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:02.228246 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:41506.service: Deactivated successfully. Jul 14 21:59:02.229486 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:59:02.229536 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:59:02.226000 audit[5616]: CRED_DISP pid=5616 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:02.230477 systemd-logind[1305]: Removed session 8. Jul 14 21:59:02.231772 kernel: audit: type=1106 audit(1752530342.226:452): pid=5616 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:02.231837 kernel: audit: type=1104 audit(1752530342.226:453): pid=5616 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:02.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.75:22-10.0.0.1:41506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:04.904290 kubelet[2191]: E0714 21:59:04.904253 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:06.902943 kubelet[2191]: E0714 21:59:06.902908 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:07.227617 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:32768.service. Jul 14 21:59:07.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.75:22-10.0.0.1:32768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:07.230465 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 21:59:07.230542 kernel: audit: type=1130 audit(1752530347.227:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.75:22-10.0.0.1:32768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:07.270000 audit[5631]: USER_ACCT pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.270804 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 32768 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:07.272392 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:07.271000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.274953 kernel: audit: type=1101 audit(1752530347.270:456): pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.275024 kernel: audit: type=1103 audit(1752530347.271:457): pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.275051 kernel: audit: type=1006 audit(1752530347.271:458): pid=5631 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 14 21:59:07.276764 kernel: audit: type=1300 audit(1752530347.271:458): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde1af450 a2=3 a3=1 items=0 ppid=1 pid=5631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 14 21:59:07.271000 audit[5631]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde1af450 a2=3 a3=1 items=0 ppid=1 pid=5631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:07.277153 systemd[1]: Started session-9.scope. Jul 14 21:59:07.277341 systemd-logind[1305]: New session 9 of user core. Jul 14 21:59:07.278955 kernel: audit: type=1327 audit(1752530347.271:458): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:07.271000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:07.283000 audit[5631]: USER_START pid=5631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.284000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.288444 kernel: audit: type=1105 audit(1752530347.283:459): pid=5631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.288509 kernel: audit: type=1103 audit(1752530347.284:460): pid=5634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.453799 sshd[5631]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:07.454000 
audit[5631]: USER_END pid=5631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.455000 audit[5631]: CRED_DISP pid=5631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.457640 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:32768.service: Deactivated successfully. Jul 14 21:59:07.458435 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:59:07.459436 kernel: audit: type=1106 audit(1752530347.454:461): pid=5631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.459489 kernel: audit: type=1104 audit(1752530347.455:462): pid=5631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:07.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.75:22-10.0.0.1:32768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:07.459942 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:59:07.460622 systemd-logind[1305]: Removed session 9. 
Jul 14 21:59:11.067274 kubelet[2191]: I0714 21:59:11.067229 2191 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:59:11.146000 audit[5653]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:59:11.146000 audit[5653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffdf6944d0 a2=0 a3=1 items=0 ppid=2295 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:11.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:59:11.157000 audit[5653]: NETFILTER_CFG table=nat:131 family=2 entries=38 op=nft_register_chain pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 21:59:11.157000 audit[5653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=ffffdf6944d0 a2=0 a3=1 items=0 ppid=2295 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:11.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 21:59:11.902519 kubelet[2191]: E0714 21:59:11.902471 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:12.456920 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:39706.service. 
Jul 14 21:59:12.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.75:22-10.0.0.1:39706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:12.459961 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 14 21:59:12.460049 kernel: audit: type=1130 audit(1752530352.456:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.75:22-10.0.0.1:39706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:12.495000 audit[5654]: USER_ACCT pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.496775 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:12.498210 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:12.497000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.501009 kernel: audit: type=1101 audit(1752530352.495:467): pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.501065 kernel: audit: type=1103 audit(1752530352.497:468): pid=5654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.501085 kernel: audit: type=1006 audit(1752530352.497:469): pid=5654 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 14 21:59:12.497000 audit[5654]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0f63cf0 a2=3 a3=1 items=0 ppid=1 pid=5654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:12.502667 systemd-logind[1305]: New session 10 of user core. Jul 14 21:59:12.502934 systemd[1]: Started session-10.scope. Jul 14 21:59:12.504627 kernel: audit: type=1300 audit(1752530352.497:469): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0f63cf0 a2=3 a3=1 items=0 ppid=1 pid=5654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:12.504665 kernel: audit: type=1327 audit(1752530352.497:469): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:12.497000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:12.506000 audit[5654]: USER_START pid=5654 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.510000 audit[5657]: CRED_ACQ pid=5657 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.512969 kernel: audit: type=1105 audit(1752530352.506:470): pid=5654 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.513028 kernel: audit: type=1103 audit(1752530352.510:471): pid=5657 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.628774 sshd[5654]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:12.632653 kernel: audit: type=1106 audit(1752530352.629:472): pid=5654 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.629000 audit[5654]: USER_END pid=5654 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.632000 audit[5654]: CRED_DISP pid=5654 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.635620 kernel: audit: type=1104 audit(1752530352.632:473): pid=5654 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:12.636190 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:39706.service: Deactivated successfully. 
Jul 14 21:59:12.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.75:22-10.0.0.1:39706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:12.637356 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:59:12.637360 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit. Jul 14 21:59:12.638274 systemd-logind[1305]: Removed session 10. Jul 14 21:59:13.897989 systemd[1]: run-containerd-runc-k8s.io-92ad90cc85a743a68a530f8402f9f31f0d3df3e158813ecbd6f2fbacf0a6a0c9-runc.xFjZ30.mount: Deactivated successfully. Jul 14 21:59:17.631717 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:39714.service. Jul 14 21:59:17.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.75:22-10.0.0.1:39714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:17.634574 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 21:59:17.634653 kernel: audit: type=1130 audit(1752530357.631:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.75:22-10.0.0.1:39714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:17.677000 audit[5693]: USER_ACCT pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.678079 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 39714 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:17.679271 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:17.677000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.682336 kernel: audit: type=1101 audit(1752530357.677:476): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.682400 kernel: audit: type=1103 audit(1752530357.677:477): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.682421 kernel: audit: type=1006 audit(1752530357.677:478): pid=5693 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 14 21:59:17.677000 audit[5693]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2a06870 a2=3 a3=1 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:17.685855 
kernel: audit: type=1300 audit(1752530357.677:478): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2a06870 a2=3 a3=1 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:17.677000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:17.686866 kernel: audit: type=1327 audit(1752530357.677:478): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:17.690834 systemd-logind[1305]: New session 11 of user core. Jul 14 21:59:17.691063 systemd[1]: Started session-11.scope. Jul 14 21:59:17.694000 audit[5693]: USER_START pid=5693 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.697000 audit[5696]: CRED_ACQ pid=5696 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.700189 kernel: audit: type=1105 audit(1752530357.694:479): pid=5693 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.700258 kernel: audit: type=1103 audit(1752530357.697:480): pid=5696 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.828752 sshd[5693]: pam_unix(sshd:session): session closed for user core Jul 14 
21:59:17.829000 audit[5693]: USER_END pid=5693 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.831089 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:39714.service: Deactivated successfully. Jul 14 21:59:17.831928 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:59:17.829000 audit[5693]: CRED_DISP pid=5693 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.832685 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit. Jul 14 21:59:17.833442 systemd-logind[1305]: Removed session 11. Jul 14 21:59:17.834306 kernel: audit: type=1106 audit(1752530357.829:481): pid=5693 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.834372 kernel: audit: type=1104 audit(1752530357.829:482): pid=5693 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:17.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.75:22-10.0.0.1:39714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:19.903044 kubelet[2191]: E0714 21:59:19.903001 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:21.054160 systemd[1]: run-containerd-runc-k8s.io-a6ee39b3ec3486e9d5db271dd640574a3a4514566ddcd8bd17d4d8f19555f555-runc.hhKCFO.mount: Deactivated successfully. Jul 14 21:59:21.069864 systemd[1]: run-containerd-runc-k8s.io-17e2162caf8f4400320837e77acb7df78689f9c624f91cc504567f498f7337f8-runc.4CNE3K.mount: Deactivated successfully. Jul 14 21:59:22.833633 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 21:59:22.833763 kernel: audit: type=1130 audit(1752530362.832:484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.75:22-10.0.0.1:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:22.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.75:22-10.0.0.1:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:22.832974 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:50018.service. 
Jul 14 21:59:22.882563 kernel: audit: type=1101 audit(1752530362.877:485): pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.882690 kernel: audit: type=1103 audit(1752530362.878:486): pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.877000 audit[5751]: USER_ACCT pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.878000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.879339 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:22.883128 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 50018 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:22.884733 kernel: audit: type=1006 audit(1752530362.878:487): pid=5751 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jul 14 21:59:22.878000 audit[5751]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd14512d0 a2=3 a3=1 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:22.887468 kernel: audit: 
type=1300 audit(1752530362.878:487): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd14512d0 a2=3 a3=1 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:22.878000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:22.888670 kernel: audit: type=1327 audit(1752530362.878:487): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:22.890294 systemd[1]: Started session-12.scope. Jul 14 21:59:22.890689 systemd-logind[1305]: New session 12 of user core. Jul 14 21:59:22.899000 audit[5751]: USER_START pid=5751 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.903617 kernel: audit: type=1105 audit(1752530362.899:488): pid=5751 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.903000 audit[5754]: CRED_ACQ pid=5754 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:22.906630 kernel: audit: type=1103 audit(1752530362.903:489): pid=5754 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.085696 sshd[5751]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:23.087000 
audit[5751]: USER_END pid=5751 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.087000 audit[5751]: CRED_DISP pid=5751 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.095106 kernel: audit: type=1106 audit(1752530363.087:490): pid=5751 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.095193 kernel: audit: type=1104 audit(1752530363.087:491): pid=5751 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.097773 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:50024.service. Jul 14 21:59:23.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.75:22-10.0.0.1:50024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:23.098485 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:50018.service: Deactivated successfully. Jul 14 21:59:23.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.75:22-10.0.0.1:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:23.100295 systemd-logind[1305]: Session 12 logged out. 
Waiting for processes to exit. Jul 14 21:59:23.100296 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 21:59:23.105291 systemd-logind[1305]: Removed session 12. Jul 14 21:59:23.129000 audit[5765]: USER_ACCT pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.130045 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 50024 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:23.130000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.130000 audit[5765]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4c1bb00 a2=3 a3=1 items=0 ppid=1 pid=5765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:23.130000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:23.131243 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:23.135330 systemd-logind[1305]: New session 13 of user core. Jul 14 21:59:23.139000 audit[5765]: USER_START pid=5765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.135788 systemd[1]: Started session-13.scope. 
Jul 14 21:59:23.140000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.297340 sshd[5765]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:23.297000 audit[5765]: USER_END pid=5765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.297000 audit[5765]: CRED_DISP pid=5765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.300990 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:50026.service. Jul 14 21:59:23.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.75:22-10.0.0.1:50026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:23.307029 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit. Jul 14 21:59:23.307172 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:50024.service: Deactivated successfully. Jul 14 21:59:23.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.75:22-10.0.0.1:50024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:23.308165 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 21:59:23.308656 systemd-logind[1305]: Removed session 13. 
Jul 14 21:59:23.355000 audit[5778]: USER_ACCT pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.356029 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 50026 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:23.356000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.356000 audit[5778]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2dc3500 a2=3 a3=1 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:23.356000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:23.357352 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:23.361538 systemd-logind[1305]: New session 14 of user core. Jul 14 21:59:23.362044 systemd[1]: Started session-14.scope. 
Jul 14 21:59:23.366000 audit[5778]: USER_START pid=5778 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.367000 audit[5783]: CRED_ACQ pid=5783 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.481280 sshd[5778]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:23.481000 audit[5778]: USER_END pid=5778 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.481000 audit[5778]: CRED_DISP pid=5778 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:23.485900 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:50026.service: Deactivated successfully. Jul 14 21:59:23.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.75:22-10.0.0.1:50026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:23.488827 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 21:59:23.488878 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit. Jul 14 21:59:23.489947 systemd-logind[1305]: Removed session 14. 
Jul 14 21:59:28.487887 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 14 21:59:28.488031 kernel: audit: type=1130 audit(1752530368.484:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.75:22-10.0.0.1:50038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:28.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.75:22-10.0.0.1:50038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:28.484908 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:50038.service. Jul 14 21:59:28.523000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.523838 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 50038 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:28.524839 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:28.523000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.528208 kernel: audit: type=1101 audit(1752530368.523:512): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.528258 kernel: audit: type=1103 audit(1752530368.523:513): pid=5801 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.528276 kernel: audit: type=1006 audit(1752530368.523:514): pid=5801 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 14 21:59:28.530490 kernel: audit: type=1300 audit(1752530368.523:514): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec0c2820 a2=3 a3=1 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:28.523000 audit[5801]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec0c2820 a2=3 a3=1 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:28.530312 systemd[1]: Started session-15.scope. Jul 14 21:59:28.530901 systemd-logind[1305]: New session 15 of user core. 
Jul 14 21:59:28.523000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:28.532671 kernel: audit: type=1327 audit(1752530368.523:514): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:28.535000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.536000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.540568 kernel: audit: type=1105 audit(1752530368.535:515): pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.540623 kernel: audit: type=1103 audit(1752530368.536:516): pid=5804 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.645490 sshd[5801]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:28.651360 kernel: audit: type=1106 audit(1752530368.645:517): pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.651420 kernel: audit: type=1104 audit(1752530368.646:518): pid=5801 uid=0 auid=500 
ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.645000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.646000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:28.648298 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit. Jul 14 21:59:28.648609 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:50038.service: Deactivated successfully. Jul 14 21:59:28.649406 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 21:59:28.650286 systemd-logind[1305]: Removed session 15. Jul 14 21:59:28.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.75:22-10.0.0.1:50038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:33.649041 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:40938.service. Jul 14 21:59:33.650000 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 21:59:33.650042 kernel: audit: type=1130 audit(1752530373.648:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.75:22-10.0.0.1:40938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:33.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.75:22-10.0.0.1:40938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:33.686000 audit[5816]: USER_ACCT pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.687484 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 40938 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:33.689028 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:33.688000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.691790 kernel: audit: type=1101 audit(1752530373.686:521): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.691845 kernel: audit: type=1103 audit(1752530373.688:522): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.691875 kernel: audit: type=1006 audit(1752530373.688:523): pid=5816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 14 21:59:33.693130 systemd[1]: Started session-16.scope. 
Jul 14 21:59:33.688000 audit[5816]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebcb6510 a2=3 a3=1 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:33.693297 systemd-logind[1305]: New session 16 of user core. Jul 14 21:59:33.695494 kernel: audit: type=1300 audit(1752530373.688:523): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebcb6510 a2=3 a3=1 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:33.695560 kernel: audit: type=1327 audit(1752530373.688:523): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:33.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:33.696000 audit[5816]: USER_START pid=5816 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.697000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.701658 kernel: audit: type=1105 audit(1752530373.696:524): pid=5816 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.701701 kernel: audit: type=1103 audit(1752530373.697:525): pid=5819 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.807199 sshd[5816]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:33.807000 audit[5816]: USER_END pid=5816 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.810020 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:40938.service: Deactivated successfully. Jul 14 21:59:33.811085 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 21:59:33.807000 audit[5816]: CRED_DISP pid=5816 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.811384 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit. Jul 14 21:59:33.812048 systemd-logind[1305]: Removed session 16. 
Jul 14 21:59:33.813190 kernel: audit: type=1106 audit(1752530373.807:526): pid=5816 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.813271 kernel: audit: type=1104 audit(1752530373.807:527): pid=5816 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:33.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.75:22-10.0.0.1:40938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:38.810639 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:40944.service. Jul 14 21:59:38.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.75:22-10.0.0.1:40944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:38.813590 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 21:59:38.813677 kernel: audit: type=1130 audit(1752530378.809:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.75:22-10.0.0.1:40944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:38.852000 audit[5830]: USER_ACCT pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.854480 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 40944 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:38.855000 audit[5830]: CRED_ACQ pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.857109 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:38.861407 kernel: audit: type=1101 audit(1752530378.852:530): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.861502 kernel: audit: type=1103 audit(1752530378.855:531): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.861526 kernel: audit: type=1006 audit(1752530378.855:532): pid=5830 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 14 21:59:38.861549 kernel: audit: type=1300 audit(1752530378.855:532): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc56024b0 a2=3 a3=1 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 14 21:59:38.855000 audit[5830]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc56024b0 a2=3 a3=1 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:38.861717 systemd[1]: Started session-17.scope. Jul 14 21:59:38.861999 systemd-logind[1305]: New session 17 of user core. Jul 14 21:59:38.855000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:38.864414 kernel: audit: type=1327 audit(1752530378.855:532): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:38.865000 audit[5830]: USER_START pid=5830 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.871607 kernel: audit: type=1105 audit(1752530378.865:533): pid=5830 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.869000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.875016 kernel: audit: type=1103 audit(1752530378.869:534): pid=5833 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.998558 sshd[5830]: pam_unix(sshd:session): session closed for user core Jul 14 
21:59:38.998000 audit[5830]: USER_END pid=5830 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:38.998000 audit[5830]: CRED_DISP pid=5830 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:39.004787 kernel: audit: type=1106 audit(1752530378.998:535): pid=5830 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:39.004858 kernel: audit: type=1104 audit(1752530378.998:536): pid=5830 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:39.005098 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:40944.service: Deactivated successfully. Jul 14 21:59:39.005992 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 21:59:39.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.75:22-10.0.0.1:40944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:39.006612 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit. Jul 14 21:59:39.007339 systemd-logind[1305]: Removed session 17. Jul 14 21:59:44.001364 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:54182.service. 
Jul 14 21:59:43.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.75:22-10.0.0.1:54182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:44.004371 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 21:59:44.004456 kernel: audit: type=1130 audit(1752530383.999:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.75:22-10.0.0.1:54182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:44.044000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.046780 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 54182 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:44.049675 kernel: audit: type=1101 audit(1752530384.044:539): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.048000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.050421 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:44.053741 kernel: audit: type=1103 audit(1752530384.048:540): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.053808 kernel: audit: type=1006 audit(1752530384.048:541): pid=5867 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 14 21:59:44.053836 kernel: audit: type=1300 audit(1752530384.048:541): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc092f660 a2=3 a3=1 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:44.048000 audit[5867]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc092f660 a2=3 a3=1 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:44.056502 kernel: audit: type=1327 audit(1752530384.048:541): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:44.048000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:44.056151 systemd-logind[1305]: New session 18 of user core. Jul 14 21:59:44.057194 systemd[1]: Started session-18.scope. 
Jul 14 21:59:44.059000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.062000 audit[5870]: CRED_ACQ pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.066652 kernel: audit: type=1105 audit(1752530384.059:542): pid=5867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.066704 kernel: audit: type=1103 audit(1752530384.062:543): pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.302698 sshd[5867]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:44.303000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.305397 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:54194.service. Jul 14 21:59:44.306562 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:54182.service: Deactivated successfully. Jul 14 21:59:44.307375 systemd[1]: session-18.scope: Deactivated successfully. 
Jul 14 21:59:44.309350 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit. Jul 14 21:59:44.310377 systemd-logind[1305]: Removed session 18. Jul 14 21:59:44.303000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.312576 kernel: audit: type=1106 audit(1752530384.303:544): pid=5867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.312660 kernel: audit: type=1104 audit(1752530384.303:545): pid=5867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.75:22-10.0.0.1:54194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:44.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.75:22-10.0.0.1:54182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:44.346000 audit[5880]: USER_ACCT pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.347931 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 54194 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:44.347000 audit[5880]: CRED_ACQ pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.347000 audit[5880]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec18a1b0 a2=3 a3=1 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:44.347000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:44.349291 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:44.353922 systemd[1]: Started session-19.scope. Jul 14 21:59:44.354966 systemd-logind[1305]: New session 19 of user core. 
Jul 14 21:59:44.357000 audit[5880]: USER_START pid=5880 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:44.359000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:48.903753 kubelet[2191]: E0714 21:59:48.903716 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:51.075721 systemd[1]: run-containerd-runc-k8s.io-17e2162caf8f4400320837e77acb7df78689f9c624f91cc504567f498f7337f8-runc.pDeMZr.mount: Deactivated successfully. Jul 14 21:59:54.605631 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:53424.service. Jul 14 21:59:54.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.75:22-10.0.0.1:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:54.606353 sshd[5880]: pam_unix(sshd:session): session closed for user core Jul 14 21:59:54.608452 kernel: kauditd_printk_skb: 9 callbacks suppressed Jul 14 21:59:54.608527 kernel: audit: type=1130 audit(1752530394.604:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.75:22-10.0.0.1:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 21:59:54.608553 kernel: audit: type=1106 audit(1752530394.605:554): pid=5880 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.605000 audit[5880]: USER_END pid=5880 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.605000 audit[5880]: CRED_DISP pid=5880 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.613638 kernel: audit: type=1104 audit(1752530394.605:555): pid=5880 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.616466 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:54194.service: Deactivated successfully. Jul 14 21:59:54.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.75:22-10.0.0.1:54194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:54.617877 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 21:59:54.618567 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit. Jul 14 21:59:54.619546 systemd-logind[1305]: Removed session 19. 
Jul 14 21:59:54.619709 kernel: audit: type=1131 audit(1752530394.615:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.75:22-10.0.0.1:54194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 21:59:54.653000 audit[5960]: USER_ACCT pid=5960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.655280 sshd[5960]: Accepted publickey for core from 10.0.0.1 port 53424 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 21:59:54.656474 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:59:54.654000 audit[5960]: CRED_ACQ pid=5960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.659526 kernel: audit: type=1101 audit(1752530394.653:557): pid=5960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.659609 kernel: audit: type=1103 audit(1752530394.654:558): pid=5960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.659635 kernel: audit: type=1006 audit(1752530394.654:559): pid=5960 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jul 14 21:59:54.654000 audit[5960]: SYSCALL arch=c00000b7 
syscall=64 success=yes exit=3 a0=5 a1=fffff9247ac0 a2=3 a3=1 items=0 ppid=1 pid=5960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:54.661019 systemd[1]: Started session-20.scope. Jul 14 21:59:54.662023 systemd-logind[1305]: New session 20 of user core. Jul 14 21:59:54.663079 kernel: audit: type=1300 audit(1752530394.654:559): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9247ac0 a2=3 a3=1 items=0 ppid=1 pid=5960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 21:59:54.663171 kernel: audit: type=1327 audit(1752530394.654:559): proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:54.654000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 21:59:54.665000 audit[5960]: USER_START pid=5960 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.667000 audit[5965]: CRED_ACQ pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:54.670619 kernel: audit: type=1105 audit(1752530394.665:560): pid=5960 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 21:59:58.903594 kubelet[2191]: E0714 21:59:58.903552 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:59:59.977807 systemd[1]: run-containerd-runc-k8s.io-17e2162caf8f4400320837e77acb7df78689f9c624f91cc504567f498f7337f8-runc.BAWKVI.mount: Deactivated successfully. Jul 14 22:00:10.902777 kubelet[2191]: E0714 22:00:10.902742 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:00:16.448000 audit[6043]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=6043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:16.452079 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:00:16.452184 kernel: audit: type=1325 audit(1752530416.448:562): table=filter:132 family=2 entries=20 op=nft_register_rule pid=6043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:16.452217 kernel: audit: type=1300 audit(1752530416.448:562): arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffdb7499a0 a2=0 a3=1 items=0 ppid=2295 pid=6043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.448000 audit[6043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffdb7499a0 a2=0 a3=1 items=0 ppid=2295 pid=6043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:16.456324 kernel: audit: type=1327 audit(1752530416.448:562): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:16.462000 audit[6043]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=6043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:16.462000 audit[6043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffdb7499a0 a2=0 a3=1 items=0 ppid=2295 pid=6043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.466201 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:49918.service. Jul 14 22:00:16.467663 sshd[5960]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:16.470105 kernel: audit: type=1325 audit(1752530416.462:563): table=nat:133 family=2 entries=26 op=nft_register_rule pid=6043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:16.470145 kernel: audit: type=1300 audit(1752530416.462:563): arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffdb7499a0 a2=0 a3=1 items=0 ppid=2295 pid=6043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.470168 kernel: audit: type=1327 audit(1752530416.462:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:16.470187 kernel: audit: type=1130 audit(1752530416.465:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.75:22-10.0.0.1:49918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:00:16.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:16.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.75:22-10.0.0.1:49918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:16.474681 kernel: audit: type=1106 audit(1752530416.469:565): pid=5960 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.469000 audit[5960]: USER_END pid=5960 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.472020 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:53424.service: Deactivated successfully. Jul 14 22:00:16.472945 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:00:16.473756 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:00:16.474517 systemd-logind[1305]: Removed session 20. 
Jul 14 22:00:16.477663 kernel: audit: type=1104 audit(1752530416.469:566): pid=5960 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.477723 kernel: audit: type=1131 audit(1752530416.471:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.75:22-10.0.0.1:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:16.469000 audit[5960]: CRED_DISP pid=5960 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.75:22-10.0.0.1:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:00:16.489000 audit[6049]: NETFILTER_CFG table=filter:134 family=2 entries=32 op=nft_register_rule pid=6049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:16.489000 audit[6049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc56efe10 a2=0 a3=1 items=0 ppid=2295 pid=6049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:16.495000 audit[6049]: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule pid=6049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:16.495000 audit[6049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc56efe10 a2=0 a3=1 items=0 ppid=2295 pid=6049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:16.514000 audit[6044]: USER_ACCT pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.515427 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:00:16.515000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.515000 audit[6044]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff354afb0 a2=3 a3=1 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:00:16.516476 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:00:16.520466 systemd-logind[1305]: New session 21 of user core. Jul 14 22:00:16.520873 systemd[1]: Started session-21.scope. Jul 14 22:00:16.525000 audit[6044]: USER_START pid=6044 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.526000 audit[6051]: CRED_ACQ pid=6051 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.909233 sshd[6044]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:16.909000 audit[6044]: USER_END pid=6044 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.909000 audit[6044]: CRED_DISP pid=6044 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.75:22-10.0.0.1:49932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:16.912065 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:49932.service. Jul 14 22:00:16.917818 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:49918.service: Deactivated successfully. Jul 14 22:00:16.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.75:22-10.0.0.1:49918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:16.927410 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:00:16.927728 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:00:16.932797 systemd-logind[1305]: Removed session 21. Jul 14 22:00:16.958000 audit[6059]: USER_ACCT pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.959461 sshd[6059]: Accepted publickey for core from 10.0.0.1 port 49932 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:00:16.960000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.960000 audit[6059]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffede4d120 a2=3 a3=1 items=0 ppid=1 pid=6059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:16.960000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:00:16.961108 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:00:16.965012 systemd-logind[1305]: New session 22 of user core. Jul 14 22:00:16.965411 systemd[1]: Started session-22.scope. Jul 14 22:00:16.968000 audit[6059]: USER_START pid=6059 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:16.970000 audit[6064]: CRED_ACQ pid=6064 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:17.143515 sshd[6059]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:17.144000 audit[6059]: USER_END pid=6059 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:17.144000 audit[6059]: CRED_DISP pid=6059 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:17.146561 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:49932.service: Deactivated successfully. Jul 14 22:00:17.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.75:22-10.0.0.1:49932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:00:17.147837 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:00:17.147871 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:00:17.148902 systemd-logind[1305]: Removed session 22. Jul 14 22:00:19.902991 kubelet[2191]: E0714 22:00:19.902947 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:00:21.788000 audit[6119]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=6119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:21.791597 kernel: kauditd_printk_skb: 27 callbacks suppressed Jul 14 22:00:21.791665 kernel: audit: type=1325 audit(1752530421.788:587): table=filter:136 family=2 entries=20 op=nft_register_rule pid=6119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:21.791695 kernel: audit: type=1300 audit(1752530421.788:587): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc19ccd30 a2=0 a3=1 items=0 ppid=2295 pid=6119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:21.788000 audit[6119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc19ccd30 a2=0 a3=1 items=0 ppid=2295 pid=6119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:21.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:21.795693 kernel: audit: type=1327 audit(1752530421.788:587): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:21.798000 audit[6119]: NETFILTER_CFG table=nat:137 family=2 entries=110 op=nft_register_chain pid=6119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:21.798000 audit[6119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffc19ccd30 a2=0 a3=1 items=0 ppid=2295 pid=6119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:21.803668 kernel: audit: type=1325 audit(1752530421.798:588): table=nat:137 family=2 entries=110 op=nft_register_chain pid=6119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:00:21.803750 kernel: audit: type=1300 audit(1752530421.798:588): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffc19ccd30 a2=0 a3=1 items=0 ppid=2295 pid=6119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:21.803803 kernel: audit: type=1327 audit(1752530421.798:588): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:21.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:00:22.145790 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:49948.service. Jul 14 22:00:22.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.75:22-10.0.0.1:49948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:00:22.151920 kernel: audit: type=1130 audit(1752530422.145:589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.75:22-10.0.0.1:49948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:22.183000 audit[6121]: USER_ACCT pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.184293 sshd[6121]: Accepted publickey for core from 10.0.0.1 port 49948 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:00:22.185846 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:00:22.184000 audit[6121]: CRED_ACQ pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.191070 kernel: audit: type=1101 audit(1752530422.183:590): pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.191140 kernel: audit: type=1103 audit(1752530422.184:591): pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.192576 kernel: audit: type=1006 audit(1752530422.184:592): pid=6121 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 14 22:00:22.184000 audit[6121]: SYSCALL 
arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdff5fd90 a2=3 a3=1 items=0 ppid=1 pid=6121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:00:22.184000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:00:22.197477 systemd[1]: Started session-23.scope. Jul 14 22:00:22.197865 systemd-logind[1305]: New session 23 of user core. Jul 14 22:00:22.202000 audit[6121]: USER_START pid=6121 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.204000 audit[6124]: CRED_ACQ pid=6124 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.317735 sshd[6121]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:22.320000 audit[6121]: USER_END pid=6121 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.320000 audit[6121]: CRED_DISP pid=6121 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:00:22.322245 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:49948.service: Deactivated successfully. 
Jul 14 22:00:22.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.75:22-10.0.0.1:49948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:22.323086 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 22:00:22.325339 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit. Jul 14 22:00:22.326591 systemd-logind[1305]: Removed session 23. Jul 14 22:00:26.903046 kubelet[2191]: E0714 22:00:26.903009 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:00:27.320868 systemd[1]: Started sshd@23-10.0.0.75:22-10.0.0.1:46054.service. Jul 14 22:00:27.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.75:22-10.0.0.1:46054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:00:27.321884 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 14 22:00:27.321933 kernel: audit: type=1130 audit(1752530427.319:598): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.75:22-10.0.0.1:46054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jul 14 22:00:27.358000 audit[6135]: USER_ACCT pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.359884 sshd[6135]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 22:00:27.361463 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:00:27.359000 audit[6135]: CRED_ACQ pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.364154 kernel: audit: type=1101 audit(1752530427.358:599): pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.364186 kernel: audit: type=1103 audit(1752530427.359:600): pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.364217 kernel: audit: type=1006 audit(1752530427.359:601): pid=6135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jul 14 22:00:27.359000 audit[6135]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde5f04b0 a2=3 a3=1 items=0 ppid=1 pid=6135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:00:27.366607 kernel: audit: type=1300 audit(1752530427.359:601): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde5f04b0 a2=3 a3=1 items=0 ppid=1 pid=6135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:00:27.359000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 14 22:00:27.369665 kernel: audit: type=1327 audit(1752530427.359:601): proctitle=737368643A20636F7265205B707269765D
Jul 14 22:00:27.371049 systemd-logind[1305]: New session 24 of user core.
Jul 14 22:00:27.371851 systemd[1]: Started session-24.scope.
Jul 14 22:00:27.373000 audit[6135]: USER_START pid=6135 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.376000 audit[6138]: CRED_ACQ pid=6138 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.380436 kernel: audit: type=1105 audit(1752530427.373:602): pid=6135 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.380488 kernel: audit: type=1103 audit(1752530427.376:603): pid=6138 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.483888 sshd[6135]: pam_unix(sshd:session): session closed for user core
Jul 14 22:00:27.484000 audit[6135]: USER_END pid=6135 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.484000 audit[6135]: CRED_DISP pid=6135 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.490134 kernel: audit: type=1106 audit(1752530427.484:604): pid=6135 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.490185 kernel: audit: type=1104 audit(1752530427.484:605): pid=6135 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:27.490407 systemd[1]: sshd@23-10.0.0.75:22-10.0.0.1:46054.service: Deactivated successfully.
Jul 14 22:00:27.491467 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 22:00:27.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.75:22-10.0.0.1:46054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:00:27.492088 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit.
Jul 14 22:00:27.492820 systemd-logind[1305]: Removed session 24.
Jul 14 22:00:30.902421 kubelet[2191]: E0714 22:00:30.902377 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:00:32.487256 systemd[1]: Started sshd@24-10.0.0.75:22-10.0.0.1:53548.service.
Jul 14 22:00:32.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.75:22-10.0.0.1:53548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:00:32.488165 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 14 22:00:32.488214 kernel: audit: type=1130 audit(1752530432.485:607): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.75:22-10.0.0.1:53548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:00:32.524000 audit[6151]: USER_ACCT pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.526156 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 53548 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 22:00:32.527630 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:00:32.525000 audit[6151]: CRED_ACQ pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.530527 kernel: audit: type=1101 audit(1752530432.524:608): pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.530577 kernel: audit: type=1103 audit(1752530432.525:609): pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.530620 kernel: audit: type=1006 audit(1752530432.525:610): pid=6151 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jul 14 22:00:32.531821 kernel: audit: type=1300 audit(1752530432.525:610): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea5ed400 a2=3 a3=1 items=0 ppid=1 pid=6151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:00:32.525000 audit[6151]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea5ed400 a2=3 a3=1 items=0 ppid=1 pid=6151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:00:32.532615 systemd[1]: Started session-25.scope.
Jul 14 22:00:32.532810 systemd-logind[1305]: New session 25 of user core.
Jul 14 22:00:32.534049 kernel: audit: type=1327 audit(1752530432.525:610): proctitle=737368643A20636F7265205B707269765D
Jul 14 22:00:32.525000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 14 22:00:32.535000 audit[6151]: USER_START pid=6151 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.537000 audit[6154]: CRED_ACQ pid=6154 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.542037 kernel: audit: type=1105 audit(1752530432.535:611): pid=6151 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.542100 kernel: audit: type=1103 audit(1752530432.537:612): pid=6154 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.646320 sshd[6151]: pam_unix(sshd:session): session closed for user core
Jul 14 22:00:32.645000 audit[6151]: USER_END pid=6151 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.649075 systemd-logind[1305]: Session 25 logged out. Waiting for processes to exit.
Jul 14 22:00:32.649187 systemd[1]: sshd@24-10.0.0.75:22-10.0.0.1:53548.service: Deactivated successfully.
Jul 14 22:00:32.645000 audit[6151]: CRED_DISP pid=6151 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.650043 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 22:00:32.650446 systemd-logind[1305]: Removed session 25.
Jul 14 22:00:32.651949 kernel: audit: type=1106 audit(1752530432.645:613): pid=6151 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.651984 kernel: audit: type=1104 audit(1752530432.645:614): pid=6151 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:32.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.75:22-10.0.0.1:53548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:00:32.903059 kubelet[2191]: E0714 22:00:32.903024 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:00:37.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.75:22-10.0.0.1:53550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:00:37.649186 systemd[1]: Started sshd@25-10.0.0.75:22-10.0.0.1:53550.service.
Jul 14 22:00:37.649982 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 14 22:00:37.650023 kernel: audit: type=1130 audit(1752530437.647:616): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.75:22-10.0.0.1:53550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:00:37.687000 audit[6165]: USER_ACCT pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.688852 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 53550 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU
Jul 14 22:00:37.690001 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:00:37.688000 audit[6165]: CRED_ACQ pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.693197 kernel: audit: type=1101 audit(1752530437.687:617): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.693329 kernel: audit: type=1103 audit(1752530437.688:618): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.694910 kernel: audit: type=1006 audit(1752530437.688:619): pid=6165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jul 14 22:00:37.694959 kernel: audit: type=1300 audit(1752530437.688:619): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd7d18710 a2=3 a3=1 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:00:37.688000 audit[6165]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd7d18710 a2=3 a3=1 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:00:37.693686 systemd-logind[1305]: New session 26 of user core.
Jul 14 22:00:37.694496 systemd[1]: Started session-26.scope.
Jul 14 22:00:37.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 14 22:00:37.697610 kernel: audit: type=1327 audit(1752530437.688:619): proctitle=737368643A20636F7265205B707269765D
Jul 14 22:00:37.697000 audit[6165]: USER_START pid=6165 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.698000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.703454 kernel: audit: type=1105 audit(1752530437.697:620): pid=6165 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.703518 kernel: audit: type=1103 audit(1752530437.698:621): pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.803173 sshd[6165]: pam_unix(sshd:session): session closed for user core
Jul 14 22:00:37.802000 audit[6165]: USER_END pid=6165 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.805866 systemd-logind[1305]: Session 26 logged out. Waiting for processes to exit.
Jul 14 22:00:37.805985 systemd[1]: sshd@25-10.0.0.75:22-10.0.0.1:53550.service: Deactivated successfully.
Jul 14 22:00:37.802000 audit[6165]: CRED_DISP pid=6165 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.806865 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 22:00:37.807273 systemd-logind[1305]: Removed session 26.
Jul 14 22:00:37.808876 kernel: audit: type=1106 audit(1752530437.802:622): pid=6165 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.809119 kernel: audit: type=1104 audit(1752530437.802:623): pid=6165 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:00:37.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.75:22-10.0.0.1:53550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'