Jul 11 00:21:02.745017 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:21:02.745036 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu Jul 10 23:22:35 -00 2025
Jul 11 00:21:02.745044 kernel: efi: EFI v2.70 by EDK II
Jul 11 00:21:02.745050 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 11 00:21:02.745055 kernel: random: crng init done
Jul 11 00:21:02.745060 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:21:02.745067 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 11 00:21:02.745074 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:21:02.745080 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745085 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745091 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745097 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745102 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745108 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745144 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745153 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745159 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:02.745165 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:21:02.745172 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:21:02.745178 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:21:02.745184 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Jul 11 00:21:02.745190 kernel: Zone ranges:
Jul 11 00:21:02.745195 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:21:02.745203 kernel: DMA32 empty
Jul 11 00:21:02.745208 kernel: Normal empty
Jul 11 00:21:02.745214 kernel: Movable zone start for each node
Jul 11 00:21:02.745219 kernel: Early memory node ranges
Jul 11 00:21:02.745225 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 11 00:21:02.745231 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 11 00:21:02.745237 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 11 00:21:02.745243 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 11 00:21:02.745252 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 11 00:21:02.745258 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 11 00:21:02.745263 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 11 00:21:02.745269 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:21:02.745276 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:21:02.745282 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:21:02.745287 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:21:02.745293 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:21:02.745298 kernel: psci: Trusted OS migration not required
Jul 11 00:21:02.745307 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:21:02.745313 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:21:02.745320 kernel: ACPI: SRAT not present
Jul 11 00:21:02.745326 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 11 00:21:02.745332 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 11 00:21:02.745338 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:21:02.745344 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:21:02.745350 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:21:02.745357 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:21:02.745363 kernel: CPU features: detected: Spectre-v4
Jul 11 00:21:02.745369 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:21:02.745377 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:21:02.745383 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:21:02.745389 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:21:02.745395 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:21:02.745402 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:21:02.745408 kernel: Policy zone: DMA
Jul 11 00:21:02.745415 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c
Jul 11 00:21:02.745421 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:21:02.745428 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:21:02.745434 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:21:02.745440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:21:02.745448 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved)
Jul 11 00:21:02.745454 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:21:02.745460 kernel: trace event string verifier disabled
Jul 11 00:21:02.745466 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:21:02.745472 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:21:02.745479 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:21:02.745485 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:21:02.745492 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:21:02.745498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:21:02.745504 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:21:02.745510 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:21:02.745518 kernel: GICv3: 256 SPIs implemented
Jul 11 00:21:02.745524 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:21:02.745530 kernel: GICv3: Distributor has no Range Selector support
Jul 11 00:21:02.745536 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:21:02.745542 kernel: GICv3: 16 PPIs implemented
Jul 11 00:21:02.745549 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:21:02.745555 kernel: ACPI: SRAT not present
Jul 11 00:21:02.745560 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:21:02.745567 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:21:02.745573 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:21:02.745579 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 11 00:21:02.745585 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 11 00:21:02.745592 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:02.745599 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:21:02.745605 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:21:02.745611 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:21:02.745618 kernel: arm-pv: using stolen time PV
Jul 11 00:21:02.745624 kernel: Console: colour dummy device 80x25
Jul 11 00:21:02.745631 kernel: ACPI: Core revision 20210730
Jul 11 00:21:02.745637 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:21:02.745644 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:21:02.745650 kernel: LSM: Security Framework initializing
Jul 11 00:21:02.745657 kernel: SELinux: Initializing.
Jul 11 00:21:02.745664 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:21:02.745672 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:21:02.745684 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:21:02.745691 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:21:02.745698 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:21:02.745704 kernel: Remapping and enabling EFI services.
Jul 11 00:21:02.745710 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:21:02.745717 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:21:02.745725 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:21:02.745731 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 11 00:21:02.745738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:02.745744 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:21:02.745752 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:21:02.745759 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:21:02.745765 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 11 00:21:02.745772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:02.745778 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:21:02.745785 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:21:02.745792 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:21:02.745798 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 11 00:21:02.745805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:02.745811 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:21:02.745822 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:21:02.745830 kernel: SMP: Total of 4 processors activated.
Jul 11 00:21:02.745837 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:21:02.745844 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:21:02.745851 kernel: CPU features: detected: Common not Private translations
Jul 11 00:21:02.745857 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:21:02.745865 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:21:02.745871 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:21:02.745879 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:21:02.745898 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:21:02.745905 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:21:02.745911 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:21:02.745918 kernel: alternatives: patching kernel code
Jul 11 00:21:02.745926 kernel: devtmpfs: initialized
Jul 11 00:21:02.745933 kernel: KASLR enabled
Jul 11 00:21:02.745940 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:21:02.745946 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:21:02.745953 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:21:02.745960 kernel: SMBIOS 3.0.0 present.
Jul 11 00:21:02.745966 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 11 00:21:02.745973 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:21:02.745980 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:21:02.745988 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:21:02.745995 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:21:02.746002 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:21:02.746009 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Jul 11 00:21:02.746015 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:21:02.746022 kernel: cpuidle: using governor menu
Jul 11 00:21:02.746028 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:21:02.746035 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:21:02.746042 kernel: ACPI: bus type PCI registered
Jul 11 00:21:02.746050 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:21:02.746056 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:21:02.746063 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:21:02.746070 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:21:02.746077 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:21:02.746084 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:21:02.746090 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:21:02.746097 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:21:02.746104 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:21:02.746112 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:21:02.746119 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:21:02.746126 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 11 00:21:02.746132 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 11 00:21:02.746139 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 11 00:21:02.746146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:21:02.746152 kernel: ACPI: Interpreter enabled
Jul 11 00:21:02.746159 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:21:02.746166 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:21:02.746174 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:21:02.746181 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:21:02.746188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:21:02.746323 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:21:02.746391 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:21:02.746451 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:21:02.746509 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:21:02.746570 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:21:02.746580 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:21:02.746586 kernel: PCI host bridge to bus 0000:00
Jul 11 00:21:02.746660 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:21:02.746714 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:21:02.746766 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:21:02.746819 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:21:02.746905 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:21:02.746977 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:21:02.747040 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:21:02.747101 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:21:02.747161 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:21:02.747221 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:21:02.747290 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:21:02.747353 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:21:02.747409 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:21:02.747461 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:21:02.747514 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:21:02.747523 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:21:02.747529 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:21:02.747536 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:21:02.747545 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:21:02.747552 kernel: iommu: Default domain type: Translated
Jul 11 00:21:02.747559 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:21:02.747565 kernel: vgaarb: loaded
Jul 11 00:21:02.747572 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 11 00:21:02.747579 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 11 00:21:02.747586 kernel: PTP clock support registered
Jul 11 00:21:02.747593 kernel: Registered efivars operations
Jul 11 00:21:02.747599 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:21:02.747606 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:21:02.747614 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:21:02.747621 kernel: pnp: PnP ACPI init
Jul 11 00:21:02.747689 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:21:02.747699 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:21:02.747706 kernel: NET: Registered PF_INET protocol family
Jul 11 00:21:02.747713 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:21:02.747720 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:21:02.747727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:21:02.747735 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:21:02.747742 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 11 00:21:02.747773 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:21:02.747780 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:21:02.747788 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:21:02.747794 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:21:02.747801 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:21:02.747808 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:21:02.747815 kernel: kvm [1]: HYP mode not available
Jul 11 00:21:02.747824 kernel: Initialise system trusted keyrings
Jul 11 00:21:02.747831 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:21:02.747838 kernel: Key type asymmetric registered
Jul 11 00:21:02.747845 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:21:02.747852 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 11 00:21:02.747859 kernel: io scheduler mq-deadline registered
Jul 11 00:21:02.747866 kernel: io scheduler kyber registered
Jul 11 00:21:02.747872 kernel: io scheduler bfq registered
Jul 11 00:21:02.747879 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:21:02.747930 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:21:02.747937 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:21:02.748016 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:21:02.748026 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:21:02.748033 kernel: thunder_xcv, ver 1.0
Jul 11 00:21:02.748039 kernel: thunder_bgx, ver 1.0
Jul 11 00:21:02.748046 kernel: nicpf, ver 1.0
Jul 11 00:21:02.748053 kernel: nicvf, ver 1.0
Jul 11 00:21:02.748122 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:21:02.748181 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:21:02 UTC (1752193262)
Jul 11 00:21:02.748190 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:21:02.748197 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:21:02.748204 kernel: Segment Routing with IPv6
Jul 11 00:21:02.748210 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:21:02.748218 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:21:02.748224 kernel: Key type dns_resolver registered
Jul 11 00:21:02.748231 kernel: registered taskstats version 1
Jul 11 00:21:02.748240 kernel: Loading compiled-in X.509 certificates
Jul 11 00:21:02.748253 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: e29f2f0310c2b60e0457f826e7476605fb3b6ab2'
Jul 11 00:21:02.748261 kernel: Key type .fscrypt registered
Jul 11 00:21:02.748268 kernel: Key type fscrypt-provisioning registered
Jul 11 00:21:02.748275 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:21:02.748281 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:21:02.748288 kernel: ima: No architecture policies found
Jul 11 00:21:02.748295 kernel: clk: Disabling unused clocks
Jul 11 00:21:02.748302 kernel: Freeing unused kernel memory: 36416K
Jul 11 00:21:02.748310 kernel: Run /init as init process
Jul 11 00:21:02.748317 kernel: with arguments:
Jul 11 00:21:02.748323 kernel: /init
Jul 11 00:21:02.748330 kernel: with environment:
Jul 11 00:21:02.748336 kernel: HOME=/
Jul 11 00:21:02.748343 kernel: TERM=linux
Jul 11 00:21:02.748350 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:21:02.748358 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 11 00:21:02.748368 systemd[1]: Detected virtualization kvm.
Jul 11 00:21:02.748375 systemd[1]: Detected architecture arm64.
Jul 11 00:21:02.748382 systemd[1]: Running in initrd.
Jul 11 00:21:02.748390 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:21:02.748399 systemd[1]: Hostname set to .
Jul 11 00:21:02.748406 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:21:02.748414 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:21:02.748421 systemd[1]: Started systemd-ask-password-console.path.
Jul 11 00:21:02.748429 systemd[1]: Reached target cryptsetup.target.
Jul 11 00:21:02.748436 systemd[1]: Reached target paths.target.
Jul 11 00:21:02.748443 systemd[1]: Reached target slices.target.
Jul 11 00:21:02.748451 systemd[1]: Reached target swap.target.
Jul 11 00:21:02.748458 systemd[1]: Reached target timers.target.
Jul 11 00:21:02.748465 systemd[1]: Listening on iscsid.socket.
Jul 11 00:21:02.748472 systemd[1]: Listening on iscsiuio.socket.
Jul 11 00:21:02.748481 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 11 00:21:02.748489 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 11 00:21:02.748496 systemd[1]: Listening on systemd-journald.socket.
Jul 11 00:21:02.748503 systemd[1]: Listening on systemd-networkd.socket.
Jul 11 00:21:02.748510 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 11 00:21:02.748518 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 11 00:21:02.748525 systemd[1]: Reached target sockets.target.
Jul 11 00:21:02.748532 systemd[1]: Starting kmod-static-nodes.service...
Jul 11 00:21:02.748540 systemd[1]: Finished network-cleanup.service.
Jul 11 00:21:02.748548 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:21:02.748556 systemd[1]: Starting systemd-journald.service...
Jul 11 00:21:02.748563 systemd[1]: Starting systemd-modules-load.service...
Jul 11 00:21:02.748570 systemd[1]: Starting systemd-resolved.service...
Jul 11 00:21:02.748577 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 11 00:21:02.748584 systemd[1]: Finished kmod-static-nodes.service.
Jul 11 00:21:02.748593 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:21:02.748601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 11 00:21:02.748611 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 11 00:21:02.748621 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 11 00:21:02.748632 systemd-journald[290]: Journal started
Jul 11 00:21:02.748671 systemd-journald[290]: Runtime Journal (/run/log/journal/313bfe83dfc043bb8974a16f55389e98) is 6.0M, max 48.7M, 42.6M free.
Jul 11 00:21:02.739436 systemd-modules-load[291]: Inserted module 'overlay'
Jul 11 00:21:02.751455 systemd[1]: Started systemd-journald.service.
Jul 11 00:21:02.751485 kernel: audit: type=1130 audit(1752193262.751:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.754125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 11 00:21:02.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.759904 kernel: audit: type=1130 audit(1752193262.755:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.763898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:21:02.768094 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 11 00:21:02.769085 kernel: Bridge firewalling registered
Jul 11 00:21:02.768227 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 11 00:21:02.772927 kernel: audit: type=1130 audit(1752193262.768:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.773715 systemd[1]: Starting dracut-cmdline.service...
Jul 11 00:21:02.775461 systemd-resolved[292]: Positive Trust Anchors:
Jul 11 00:21:02.775468 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:21:02.775495 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 11 00:21:02.786215 kernel: SCSI subsystem initialized
Jul 11 00:21:02.781082 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 11 00:21:02.790332 kernel: audit: type=1130 audit(1752193262.786:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.790385 dracut-cmdline[308]: dracut-dracut-053
Jul 11 00:21:02.781919 systemd[1]: Started systemd-resolved.service.
Jul 11 00:21:02.795052 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:21:02.795069 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:21:02.795078 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 11 00:21:02.789910 systemd[1]: Reached target nss-lookup.target.
Jul 11 00:21:02.797054 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c
Jul 11 00:21:02.801556 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 11 00:21:02.802579 systemd[1]: Finished systemd-modules-load.service.
Jul 11 00:21:02.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.806914 kernel: audit: type=1130 audit(1752193262.803:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.807025 systemd[1]: Starting systemd-sysctl.service...
Jul 11 00:21:02.813619 systemd[1]: Finished systemd-sysctl.service.
Jul 11 00:21:02.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.817905 kernel: audit: type=1130 audit(1752193262.814:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.855957 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:21:02.867905 kernel: iscsi: registered transport (tcp)
Jul 11 00:21:02.884915 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:21:02.884952 kernel: QLogic iSCSI HBA Driver
Jul 11 00:21:02.917505 systemd[1]: Finished dracut-cmdline.service.
Jul 11 00:21:02.919108 systemd[1]: Starting dracut-pre-udev.service...
Jul 11 00:21:02.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.922933 kernel: audit: type=1130 audit(1752193262.918:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:02.964904 kernel: raid6: neonx8 gen() 13633 MB/s
Jul 11 00:21:02.979915 kernel: raid6: neonx8 xor() 10714 MB/s
Jul 11 00:21:02.996921 kernel: raid6: neonx4 gen() 13445 MB/s
Jul 11 00:21:03.013918 kernel: raid6: neonx4 xor() 11073 MB/s
Jul 11 00:21:03.030922 kernel: raid6: neonx2 gen() 13000 MB/s
Jul 11 00:21:03.047924 kernel: raid6: neonx2 xor() 10185 MB/s
Jul 11 00:21:03.064914 kernel: raid6: neonx1 gen() 10521 MB/s
Jul 11 00:21:03.081918 kernel: raid6: neonx1 xor() 8736 MB/s
Jul 11 00:21:03.098919 kernel: raid6: int64x8 gen() 6229 MB/s
Jul 11 00:21:03.115920 kernel: raid6: int64x8 xor() 3520 MB/s
Jul 11 00:21:03.132932 kernel: raid6: int64x4 gen() 7176 MB/s
Jul 11 00:21:03.149922 kernel: raid6: int64x4 xor() 3829 MB/s
Jul 11 00:21:03.166920 kernel: raid6: int64x2 gen() 6114 MB/s
Jul 11 00:21:03.183923 kernel: raid6: int64x2 xor() 3297 MB/s
Jul 11 00:21:03.200925 kernel: raid6: int64x1 gen() 4999 MB/s
Jul 11 00:21:03.218000 kernel: raid6: int64x1 xor() 2630 MB/s
Jul 11 00:21:03.218018 kernel: raid6: using algorithm neonx8 gen() 13633 MB/s
Jul 11 00:21:03.218027 kernel: raid6: .... xor() 10714 MB/s, rmw enabled
Jul 11 00:21:03.219083 kernel: raid6: using neon recovery algorithm
Jul 11 00:21:03.229917 kernel: xor: measuring software checksum speed
Jul 11 00:21:03.229947 kernel: 8regs : 16251 MB/sec
Jul 11 00:21:03.231038 kernel: 32regs : 18662 MB/sec
Jul 11 00:21:03.231048 kernel: arm64_neon : 27757 MB/sec
Jul 11 00:21:03.231057 kernel: xor: using function: arm64_neon (27757 MB/sec)
Jul 11 00:21:03.286908 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 11 00:21:03.297943 systemd[1]: Finished dracut-pre-udev.service.
Jul 11 00:21:03.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:03.299745 systemd[1]: Starting systemd-udevd.service...
Jul 11 00:21:03.304394 kernel: audit: type=1130 audit(1752193263.297:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:03.304413 kernel: audit: type=1334 audit(1752193263.298:10): prog-id=7 op=LOAD
Jul 11 00:21:03.298000 audit: BPF prog-id=7 op=LOAD
Jul 11 00:21:03.298000 audit: BPF prog-id=8 op=LOAD
Jul 11 00:21:03.316754 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Jul 11 00:21:03.320216 systemd[1]: Started systemd-udevd.service.
Jul 11 00:21:03.321754 systemd[1]: Starting dracut-pre-trigger.service...
Jul 11 00:21:03.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:03.333667 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Jul 11 00:21:03.361597 systemd[1]: Finished dracut-pre-trigger.service.
Jul 11 00:21:03.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:03.363163 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:21:03.395661 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:21:03.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:03.426329 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:21:03.433526 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:21:03.433540 kernel: GPT:9289727 != 19775487 Jul 11 00:21:03.433557 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:21:03.433566 kernel: GPT:9289727 != 19775487 Jul 11 00:21:03.433573 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:21:03.433581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:21:03.443910 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (552) Jul 11 00:21:03.445615 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 11 00:21:03.454613 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 11 00:21:03.460732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:21:03.466928 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 11 00:21:03.468069 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 11 00:21:03.471061 systemd[1]: Starting disk-uuid.service... Jul 11 00:21:03.476928 disk-uuid[563]: Primary Header is updated. Jul 11 00:21:03.476928 disk-uuid[563]: Secondary Entries is updated. Jul 11 00:21:03.476928 disk-uuid[563]: Secondary Header is updated. 
Jul 11 00:21:03.480905 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:21:04.502926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:21:04.502974 disk-uuid[564]: The operation has completed successfully. Jul 11 00:21:04.525503 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:21:04.526656 systemd[1]: Finished disk-uuid.service. Jul 11 00:21:04.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.531550 systemd[1]: Starting verity-setup.service... Jul 11 00:21:04.545898 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 11 00:21:04.565446 systemd[1]: Found device dev-mapper-usr.device. Jul 11 00:21:04.567490 systemd[1]: Mounting sysusr-usr.mount... Jul 11 00:21:04.569250 systemd[1]: Finished verity-setup.service. Jul 11 00:21:04.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.615902 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 11 00:21:04.616199 systemd[1]: Mounted sysusr-usr.mount. Jul 11 00:21:04.616985 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 11 00:21:04.617620 systemd[1]: Starting ignition-setup.service... Jul 11 00:21:04.619856 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 11 00:21:04.626374 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:21:04.626408 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:21:04.626418 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:21:04.633582 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:21:04.639370 systemd[1]: Finished ignition-setup.service. Jul 11 00:21:04.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.640996 systemd[1]: Starting ignition-fetch-offline.service... Jul 11 00:21:04.700280 systemd[1]: Finished parse-ip-for-networkd.service. Jul 11 00:21:04.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.701000 audit: BPF prog-id=9 op=LOAD Jul 11 00:21:04.702337 systemd[1]: Starting systemd-networkd.service... 
Jul 11 00:21:04.714052 ignition[649]: Ignition 2.14.0 Jul 11 00:21:04.714061 ignition[649]: Stage: fetch-offline Jul 11 00:21:04.714107 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:04.714116 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:04.714250 ignition[649]: parsed url from cmdline: "" Jul 11 00:21:04.714254 ignition[649]: no config URL provided Jul 11 00:21:04.714258 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:21:04.714265 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:21:04.714283 ignition[649]: op(1): [started] loading QEMU firmware config module Jul 11 00:21:04.714289 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:21:04.727621 ignition[649]: op(1): [finished] loading QEMU firmware config module Jul 11 00:21:04.731782 systemd-networkd[740]: lo: Link UP Jul 11 00:21:04.732565 systemd-networkd[740]: lo: Gained carrier Jul 11 00:21:04.733945 systemd-networkd[740]: Enumeration completed Jul 11 00:21:04.734794 systemd[1]: Started systemd-networkd.service. Jul 11 00:21:04.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.735769 systemd[1]: Reached target network.target. Jul 11 00:21:04.737759 systemd[1]: Starting iscsiuio.service... Jul 11 00:21:04.739636 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:21:04.742678 systemd-networkd[740]: eth0: Link UP Jul 11 00:21:04.742683 systemd-networkd[740]: eth0: Gained carrier Jul 11 00:21:04.746586 systemd[1]: Started iscsiuio.service. Jul 11 00:21:04.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:04.748071 systemd[1]: Starting iscsid.service... Jul 11 00:21:04.751203 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:21:04.751203 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 11 00:21:04.751203 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 11 00:21:04.751203 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 11 00:21:04.751203 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:21:04.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.763191 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 11 00:21:04.753968 systemd[1]: Started iscsid.service. Jul 11 00:21:04.762466 systemd[1]: Starting dracut-initqueue.service... Jul 11 00:21:04.766973 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:21:04.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.772848 systemd[1]: Finished dracut-initqueue.service. Jul 11 00:21:04.773849 systemd[1]: Reached target remote-fs-pre.target. Jul 11 00:21:04.774781 systemd[1]: Reached target remote-cryptsetup.target. 
Jul 11 00:21:04.776355 systemd[1]: Reached target remote-fs.target. Jul 11 00:21:04.778428 systemd[1]: Starting dracut-pre-mount.service... Jul 11 00:21:04.785129 systemd[1]: Finished dracut-pre-mount.service. Jul 11 00:21:04.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.794023 ignition[649]: parsing config with SHA512: fd2e9d632f26cfce9b647d5daf9654f13c34f2d7d960a23c23feccbca6774693a7263d80af6c57fbc3d32461ee7a1d0bb3cf932416ffabbde10c4fef6dea5bf9 Jul 11 00:21:04.800842 unknown[649]: fetched base config from "system" Jul 11 00:21:04.800852 unknown[649]: fetched user config from "qemu" Jul 11 00:21:04.801310 ignition[649]: fetch-offline: fetch-offline passed Jul 11 00:21:04.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.802517 systemd[1]: Finished ignition-fetch-offline.service. Jul 11 00:21:04.801361 ignition[649]: Ignition finished successfully Jul 11 00:21:04.803954 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:21:04.804585 systemd[1]: Starting ignition-kargs.service... Jul 11 00:21:04.812822 ignition[761]: Ignition 2.14.0 Jul 11 00:21:04.812831 ignition[761]: Stage: kargs Jul 11 00:21:04.812949 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:04.814967 systemd[1]: Finished ignition-kargs.service. Jul 11 00:21:04.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:04.812959 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:04.813843 ignition[761]: kargs: kargs passed Jul 11 00:21:04.817077 systemd[1]: Starting ignition-disks.service... Jul 11 00:21:04.813880 ignition[761]: Ignition finished successfully Jul 11 00:21:04.823101 ignition[767]: Ignition 2.14.0 Jul 11 00:21:04.823110 ignition[767]: Stage: disks Jul 11 00:21:04.823193 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:04.823202 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:04.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.824746 systemd[1]: Finished ignition-disks.service. Jul 11 00:21:04.824022 ignition[767]: disks: disks passed Jul 11 00:21:04.825969 systemd[1]: Reached target initrd-root-device.target. Jul 11 00:21:04.824057 ignition[767]: Ignition finished successfully Jul 11 00:21:04.827563 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:21:04.828832 systemd[1]: Reached target local-fs.target. Jul 11 00:21:04.829984 systemd[1]: Reached target sysinit.target. Jul 11 00:21:04.831298 systemd[1]: Reached target basic.target. Jul 11 00:21:04.833294 systemd[1]: Starting systemd-fsck-root.service... Jul 11 00:21:04.843220 systemd-fsck[775]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 11 00:21:04.846865 systemd[1]: Finished systemd-fsck-root.service. Jul 11 00:21:04.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.848961 systemd[1]: Mounting sysroot.mount... Jul 11 00:21:04.855520 systemd[1]: Mounted sysroot.mount. 
Jul 11 00:21:04.856682 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 11 00:21:04.856251 systemd[1]: Reached target initrd-root-fs.target. Jul 11 00:21:04.860636 systemd[1]: Mounting sysroot-usr.mount... Jul 11 00:21:04.861500 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 11 00:21:04.861535 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:21:04.861557 systemd[1]: Reached target ignition-diskful.target. Jul 11 00:21:04.863334 systemd[1]: Mounted sysroot-usr.mount. Jul 11 00:21:04.865135 systemd[1]: Starting initrd-setup-root.service... Jul 11 00:21:04.868976 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:21:04.872172 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:21:04.875875 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:21:04.879519 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:21:04.903752 systemd[1]: Finished initrd-setup-root.service. Jul 11 00:21:04.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.905261 systemd[1]: Starting ignition-mount.service... Jul 11 00:21:04.906522 systemd[1]: Starting sysroot-boot.service... Jul 11 00:21:04.910493 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 11 00:21:04.918047 ignition[828]: INFO : Ignition 2.14.0 Jul 11 00:21:04.918862 ignition[828]: INFO : Stage: mount Jul 11 00:21:04.919617 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:04.920570 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:04.922802 ignition[828]: INFO : mount: mount passed Jul 11 00:21:04.923268 systemd[1]: Finished sysroot-boot.service. Jul 11 00:21:04.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:04.924915 ignition[828]: INFO : Ignition finished successfully Jul 11 00:21:04.925532 systemd[1]: Finished ignition-mount.service. Jul 11 00:21:04.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:05.576076 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 11 00:21:05.581900 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (836) Jul 11 00:21:05.584463 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:21:05.584493 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:21:05.584511 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:21:05.587187 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 11 00:21:05.588789 systemd[1]: Starting ignition-files.service... 
Jul 11 00:21:05.602022 ignition[856]: INFO : Ignition 2.14.0 Jul 11 00:21:05.602022 ignition[856]: INFO : Stage: files Jul 11 00:21:05.603509 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:05.603509 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:05.603509 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:21:05.609321 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:21:05.609321 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:21:05.612879 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:21:05.614135 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:21:05.615496 unknown[856]: wrote ssh authorized keys file for user: core Jul 11 00:21:05.616597 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:21:05.616597 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:21:05.616597 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:21:05.616597 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 11 00:21:05.616597 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 11 00:21:05.678836 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 11 00:21:05.850017 systemd-networkd[740]: eth0: Gained IPv6LL Jul 11 00:21:05.876357 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:21:05.878254 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:21:05.894563 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:21:05.894563 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:21:05.894563 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:21:05.894563 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:21:05.894563 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 11 00:21:06.424040 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 11 00:21:06.792208 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:21:06.792208 ignition[856]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 
11 00:21:06.795975 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:21:06.795975 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:21:06.850443 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:21:06.852231 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:21:06.852231 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:21:06.852231 ignition[856]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:21:06.852231 ignition[856]: INFO : files: files passed Jul 11 00:21:06.852231 ignition[856]: INFO : Ignition finished successfully Jul 11 00:21:06.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.851855 systemd[1]: Finished ignition-files.service. 
Jul 11 00:21:06.853874 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 11 00:21:06.855247 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 11 00:21:06.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.865190 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 11 00:21:06.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.855926 systemd[1]: Starting ignition-quench.service... Jul 11 00:21:06.868622 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:21:06.862121 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:21:06.862208 systemd[1]: Finished ignition-quench.service. Jul 11 00:21:06.863696 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 11 00:21:06.866075 systemd[1]: Reached target ignition-complete.target. Jul 11 00:21:06.868583 systemd[1]: Starting initrd-parse-etc.service... Jul 11 00:21:06.882807 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:21:06.882902 systemd[1]: Finished initrd-parse-etc.service. Jul 11 00:21:06.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 11 00:21:06.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.884767 systemd[1]: Reached target initrd-fs.target. Jul 11 00:21:06.885947 systemd[1]: Reached target initrd.target. Jul 11 00:21:06.887265 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 11 00:21:06.887968 systemd[1]: Starting dracut-pre-pivot.service... Jul 11 00:21:06.898995 systemd[1]: Finished dracut-pre-pivot.service. Jul 11 00:21:06.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.900731 systemd[1]: Starting initrd-cleanup.service... Jul 11 00:21:06.909436 systemd[1]: Stopped target nss-lookup.target. Jul 11 00:21:06.910309 systemd[1]: Stopped target remote-cryptsetup.target. Jul 11 00:21:06.911756 systemd[1]: Stopped target timers.target. Jul 11 00:21:06.913135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:21:06.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.913252 systemd[1]: Stopped dracut-pre-pivot.service. Jul 11 00:21:06.914534 systemd[1]: Stopped target initrd.target. Jul 11 00:21:06.916966 systemd[1]: Stopped target basic.target. Jul 11 00:21:06.918217 systemd[1]: Stopped target ignition-complete.target. Jul 11 00:21:06.919525 systemd[1]: Stopped target ignition-diskful.target. Jul 11 00:21:06.920821 systemd[1]: Stopped target initrd-root-device.target. Jul 11 00:21:06.922282 systemd[1]: Stopped target remote-fs.target. Jul 11 00:21:06.923602 systemd[1]: Stopped target remote-fs-pre.target. 
Jul 11 00:21:06.925010 systemd[1]: Stopped target sysinit.target. Jul 11 00:21:06.926279 systemd[1]: Stopped target local-fs.target. Jul 11 00:21:06.927569 systemd[1]: Stopped target local-fs-pre.target. Jul 11 00:21:06.928848 systemd[1]: Stopped target swap.target. Jul 11 00:21:06.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.930059 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:21:06.930170 systemd[1]: Stopped dracut-pre-mount.service. Jul 11 00:21:06.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.931469 systemd[1]: Stopped target cryptsetup.target. Jul 11 00:21:06.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.932652 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:21:06.932757 systemd[1]: Stopped dracut-initqueue.service. Jul 11 00:21:06.934325 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:21:06.934414 systemd[1]: Stopped ignition-fetch-offline.service. Jul 11 00:21:06.935700 systemd[1]: Stopped target paths.target. Jul 11 00:21:06.936843 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:21:06.941951 systemd[1]: Stopped systemd-ask-password-console.path. Jul 11 00:21:06.942881 systemd[1]: Stopped target slices.target. Jul 11 00:21:06.944256 systemd[1]: Stopped target sockets.target. Jul 11 00:21:06.945499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jul 11 00:21:06.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.945608 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 11 00:21:06.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.946977 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:21:06.947074 systemd[1]: Stopped ignition-files.service. Jul 11 00:21:06.951458 systemd[1]: Stopping ignition-mount.service... Jul 11 00:21:06.952330 systemd[1]: Stopping iscsid.service... Jul 11 00:21:06.953906 iscsid[747]: iscsid shutting down. Jul 11 00:21:06.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.953335 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:21:06.953450 systemd[1]: Stopped kmod-static-nodes.service. Jul 11 00:21:06.955547 systemd[1]: Stopping sysroot-boot.service... Jul 11 00:21:06.956228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:21:06.956375 systemd[1]: Stopped systemd-udev-trigger.service. Jul 11 00:21:06.960721 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:21:06.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:06.960866 systemd[1]: Stopped dracut-pre-trigger.service. Jul 11 00:21:06.964012 ignition[897]: INFO : Ignition 2.14.0 Jul 11 00:21:06.964012 ignition[897]: INFO : Stage: umount Jul 11 00:21:06.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.967694 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:06.967694 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:06.967694 ignition[897]: INFO : umount: umount passed Jul 11 00:21:06.967694 ignition[897]: INFO : Ignition finished successfully Jul 11 00:21:06.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.964440 systemd[1]: iscsid.service: Deactivated successfully. Jul 11 00:21:06.964545 systemd[1]: Stopped iscsid.service. Jul 11 00:21:06.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.968249 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:21:06.969536 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:21:06.969629 systemd[1]: Stopped ignition-mount.service. Jul 11 00:21:06.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:06.972449 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:21:06.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.972530 systemd[1]: Finished initrd-cleanup.service. Jul 11 00:21:06.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.974514 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:21:06.974547 systemd[1]: Closed iscsid.socket. Jul 11 00:21:06.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:06.975741 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:21:06.975785 systemd[1]: Stopped ignition-disks.service. Jul 11 00:21:06.977263 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:21:06.977303 systemd[1]: Stopped ignition-kargs.service. Jul 11 00:21:06.978632 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:21:06.978670 systemd[1]: Stopped ignition-setup.service. Jul 11 00:21:06.980709 systemd[1]: Stopping iscsiuio.service... Jul 11 00:21:06.982183 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 11 00:21:06.982284 systemd[1]: Stopped iscsiuio.service. Jul 11 00:21:06.983248 systemd[1]: Stopped target network.target. Jul 11 00:21:06.984459 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:21:06.984490 systemd[1]: Closed iscsiuio.socket. Jul 11 00:21:06.986276 systemd[1]: Stopping systemd-networkd.service... Jul 11 00:21:06.987854 systemd[1]: Stopping systemd-resolved.service... 
Jul 11 00:21:06.998953 systemd-networkd[740]: eth0: DHCPv6 lease lost Jul 11 00:21:07.000078 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:21:07.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.000177 systemd[1]: Stopped systemd-networkd.service. Jul 11 00:21:07.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.002221 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:21:07.005000 audit: BPF prog-id=9 op=UNLOAD Jul 11 00:21:07.002325 systemd[1]: Stopped systemd-resolved.service. Jul 11 00:21:07.004322 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:21:07.008000 audit: BPF prog-id=6 op=UNLOAD Jul 11 00:21:07.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.004350 systemd[1]: Closed systemd-networkd.socket. Jul 11 00:21:07.013859 kernel: kauditd_printk_skb: 51 callbacks suppressed Jul 11 00:21:07.013892 kernel: audit: type=1131 audit(1752193267.010:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.006193 systemd[1]: Stopping network-cleanup.service... 
Jul 11 00:21:07.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.007417 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:21:07.022178 kernel: audit: type=1131 audit(1752193267.013:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.007481 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 11 00:21:07.028094 kernel: audit: type=1131 audit(1752193267.020:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.028114 kernel: audit: type=1131 audit(1752193267.021:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.008868 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:21:07.008922 systemd[1]: Stopped systemd-sysctl.service. Jul 11 00:21:07.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:07.013907 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:21:07.036437 kernel: audit: type=1131 audit(1752193267.028:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.036457 kernel: audit: type=1131 audit(1752193267.032:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.013947 systemd[1]: Stopped systemd-modules-load.service. Jul 11 00:21:07.014840 systemd[1]: Stopping systemd-udevd.service... Jul 11 00:21:07.019777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 00:21:07.047308 kernel: audit: type=1131 audit(1752193267.039:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.020355 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:21:07.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.020441 systemd[1]: Stopped sysroot-boot.service. 
Jul 11 00:21:07.059030 kernel: audit: type=1131 audit(1752193267.048:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.059062 kernel: audit: type=1131 audit(1752193267.052:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.021860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:21:07.021968 systemd[1]: Stopped initrd-setup-root.service. Jul 11 00:21:07.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.028351 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:21:07.066327 kernel: audit: type=1131 audit(1752193267.061:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:07.028450 systemd[1]: Stopped network-cleanup.service. 
Jul 11 00:21:07.030174 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:21:07.030302 systemd[1]: Stopped systemd-udevd.service. Jul 11 00:21:07.033678 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:21:07.033712 systemd[1]: Closed systemd-udevd-control.socket. Jul 11 00:21:07.037201 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:21:07.037230 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 11 00:21:07.038586 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:21:07.038629 systemd[1]: Stopped dracut-pre-udev.service. Jul 11 00:21:07.039914 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:21:07.039952 systemd[1]: Stopped dracut-cmdline.service. Jul 11 00:21:07.048412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:21:07.048453 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 11 00:21:07.052912 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 11 00:21:07.059835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:21:07.059915 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 11 00:21:07.061775 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:21:07.080000 audit: BPF prog-id=5 op=UNLOAD Jul 11 00:21:07.080000 audit: BPF prog-id=4 op=UNLOAD Jul 11 00:21:07.080000 audit: BPF prog-id=3 op=UNLOAD Jul 11 00:21:07.080000 audit: BPF prog-id=8 op=UNLOAD Jul 11 00:21:07.080000 audit: BPF prog-id=7 op=UNLOAD Jul 11 00:21:07.061857 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 11 00:21:07.065499 systemd[1]: Reached target initrd-switch-root.target. Jul 11 00:21:07.071022 systemd[1]: Starting initrd-switch-root.service... Jul 11 00:21:07.079047 systemd[1]: Switching root. Jul 11 00:21:07.091256 systemd-journald[290]: Journal stopped Jul 11 00:21:09.135805 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). 
Jul 11 00:21:09.135861 kernel: SELinux: Class mctp_socket not defined in policy. Jul 11 00:21:09.135874 kernel: SELinux: Class anon_inode not defined in policy. Jul 11 00:21:09.135905 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 11 00:21:09.135916 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:21:09.135925 kernel: SELinux: policy capability open_perms=1 Jul 11 00:21:09.135939 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:21:09.135948 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:21:09.135958 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:21:09.135967 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:21:09.135976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:21:09.135986 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:21:09.135996 systemd[1]: Successfully loaded SELinux policy in 32.571ms. Jul 11 00:21:09.136017 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.372ms. Jul 11 00:21:09.136029 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 11 00:21:09.136040 systemd[1]: Detected virtualization kvm. Jul 11 00:21:09.136050 systemd[1]: Detected architecture arm64. Jul 11 00:21:09.136066 systemd[1]: Detected first boot. Jul 11 00:21:09.136076 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:21:09.136086 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 11 00:21:09.136096 systemd[1]: Populated /etc with preset unit settings. 
Jul 11 00:21:09.136107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:21:09.136118 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:21:09.136130 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:21:09.136143 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:21:09.136154 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 11 00:21:09.136164 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 11 00:21:09.136174 systemd[1]: Created slice system-addon\x2drun.slice. Jul 11 00:21:09.136185 systemd[1]: Created slice system-getty.slice. Jul 11 00:21:09.136195 systemd[1]: Created slice system-modprobe.slice. Jul 11 00:21:09.136206 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 11 00:21:09.136218 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 11 00:21:09.136237 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 11 00:21:09.136249 systemd[1]: Created slice user.slice. Jul 11 00:21:09.136260 systemd[1]: Started systemd-ask-password-console.path. Jul 11 00:21:09.136271 systemd[1]: Started systemd-ask-password-wall.path. Jul 11 00:21:09.136281 systemd[1]: Set up automount boot.automount. Jul 11 00:21:09.136292 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 11 00:21:09.136302 systemd[1]: Reached target integritysetup.target. Jul 11 00:21:09.136313 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:21:09.136326 systemd[1]: Reached target remote-fs.target. Jul 11 00:21:09.136336 systemd[1]: Reached target slices.target. 
Jul 11 00:21:09.136346 systemd[1]: Reached target swap.target. Jul 11 00:21:09.136357 systemd[1]: Reached target torcx.target. Jul 11 00:21:09.136367 systemd[1]: Reached target veritysetup.target. Jul 11 00:21:09.136377 systemd[1]: Listening on systemd-coredump.socket. Jul 11 00:21:09.136388 systemd[1]: Listening on systemd-initctl.socket. Jul 11 00:21:09.136399 systemd[1]: Listening on systemd-journald-audit.socket. Jul 11 00:21:09.136412 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 11 00:21:09.136422 systemd[1]: Listening on systemd-journald.socket. Jul 11 00:21:09.136432 systemd[1]: Listening on systemd-networkd.socket. Jul 11 00:21:09.136443 systemd[1]: Listening on systemd-udevd-control.socket. Jul 11 00:21:09.136453 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 11 00:21:09.136465 systemd[1]: Listening on systemd-userdbd.socket. Jul 11 00:21:09.136475 systemd[1]: Mounting dev-hugepages.mount... Jul 11 00:21:09.136486 systemd[1]: Mounting dev-mqueue.mount... Jul 11 00:21:09.136496 systemd[1]: Mounting media.mount... Jul 11 00:21:09.136506 systemd[1]: Mounting sys-kernel-debug.mount... Jul 11 00:21:09.136518 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 11 00:21:09.136529 systemd[1]: Mounting tmp.mount... Jul 11 00:21:09.136539 systemd[1]: Starting flatcar-tmpfiles.service... Jul 11 00:21:09.136549 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:21:09.136560 systemd[1]: Starting kmod-static-nodes.service... Jul 11 00:21:09.136570 systemd[1]: Starting modprobe@configfs.service... Jul 11 00:21:09.136581 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:21:09.136592 systemd[1]: Starting modprobe@drm.service... Jul 11 00:21:09.136602 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:21:09.136613 systemd[1]: Starting modprobe@fuse.service... Jul 11 00:21:09.136624 systemd[1]: Starting modprobe@loop.service... 
Jul 11 00:21:09.136638 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:21:09.136653 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 11 00:21:09.136663 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 11 00:21:09.136673 systemd[1]: Starting systemd-journald.service... Jul 11 00:21:09.136683 systemd[1]: Starting systemd-modules-load.service... Jul 11 00:21:09.136694 kernel: fuse: init (API version 7.34) Jul 11 00:21:09.136706 systemd[1]: Starting systemd-network-generator.service... Jul 11 00:21:09.136718 kernel: loop: module loaded Jul 11 00:21:09.136727 systemd[1]: Starting systemd-remount-fs.service... Jul 11 00:21:09.136739 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:21:09.136749 systemd[1]: Mounted dev-hugepages.mount. Jul 11 00:21:09.136760 systemd[1]: Mounted dev-mqueue.mount. Jul 11 00:21:09.136770 systemd[1]: Mounted media.mount. Jul 11 00:21:09.136780 systemd[1]: Mounted sys-kernel-debug.mount. Jul 11 00:21:09.136790 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 11 00:21:09.136801 systemd[1]: Mounted tmp.mount. Jul 11 00:21:09.136811 systemd[1]: Finished kmod-static-nodes.service. Jul 11 00:21:09.136822 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:21:09.136832 systemd[1]: Finished modprobe@configfs.service. Jul 11 00:21:09.136844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:21:09.136854 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:21:09.136864 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:21:09.136874 systemd[1]: Finished modprobe@drm.service. Jul 11 00:21:09.136898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:09.136909 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 11 00:21:09.136920 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:21:09.136930 systemd[1]: Finished modprobe@fuse.service. Jul 11 00:21:09.136942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:09.136952 systemd[1]: Finished modprobe@loop.service. Jul 11 00:21:09.136963 systemd[1]: Finished flatcar-tmpfiles.service. Jul 11 00:21:09.136974 systemd[1]: Finished systemd-modules-load.service. Jul 11 00:21:09.136985 systemd[1]: Finished systemd-network-generator.service. Jul 11 00:21:09.137000 systemd-journald[1032]: Journal started Jul 11 00:21:09.137043 systemd-journald[1032]: Runtime Journal (/run/log/journal/313bfe83dfc043bb8974a16f55389e98) is 6.0M, max 48.7M, 42.6M free. Jul 11 00:21:09.029000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:21:09.029000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 11 00:21:09.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:09.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:09.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.133000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 11 00:21:09.133000 audit[1032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff5444930 a2=4000 a3=1 items=0 ppid=1 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:09.133000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 11 00:21:09.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.140328 systemd[1]: Started systemd-journald.service. 
Jul 11 00:21:09.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.141448 systemd[1]: Finished systemd-remount-fs.service. Jul 11 00:21:09.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.142784 systemd[1]: Reached target network-pre.target. Jul 11 00:21:09.144849 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 11 00:21:09.146995 systemd[1]: Mounting sys-kernel-config.mount... Jul 11 00:21:09.147776 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:21:09.149538 systemd[1]: Starting systemd-hwdb-update.service... Jul 11 00:21:09.151746 systemd[1]: Starting systemd-journal-flush.service... Jul 11 00:21:09.152732 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:21:09.153906 systemd[1]: Starting systemd-random-seed.service... Jul 11 00:21:09.154826 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:21:09.156282 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:21:09.158522 systemd[1]: Starting systemd-sysusers.service... Jul 11 00:21:09.162844 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 11 00:21:09.163933 systemd[1]: Mounted sys-kernel-config.mount. Jul 11 00:21:09.169963 systemd-journald[1032]: Time spent on flushing to /var/log/journal/313bfe83dfc043bb8974a16f55389e98 is 11.497ms for 933 entries. Jul 11 00:21:09.169963 systemd-journald[1032]: System Journal (/var/log/journal/313bfe83dfc043bb8974a16f55389e98) is 8.0M, max 195.6M, 187.6M free. 
Jul 11 00:21:09.193755 systemd-journald[1032]: Received client request to flush runtime journal. Jul 11 00:21:09.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.174406 systemd[1]: Finished systemd-random-seed.service. Jul 11 00:21:09.194874 udevadm[1081]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 11 00:21:09.175456 systemd[1]: Reached target first-boot-complete.target. Jul 11 00:21:09.178122 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:21:09.180727 systemd[1]: Starting systemd-udev-settle.service... Jul 11 00:21:09.190433 systemd[1]: Finished systemd-sysusers.service. Jul 11 00:21:09.191837 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:21:09.194043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 11 00:21:09.195703 systemd[1]: Finished systemd-journal-flush.service. 
Jul 11 00:21:09.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.213960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 11 00:21:09.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.537550 systemd[1]: Finished systemd-hwdb-update.service. Jul 11 00:21:09.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.539724 systemd[1]: Starting systemd-udevd.service... Jul 11 00:21:09.561459 systemd-udevd[1090]: Using default interface naming scheme 'v252'. Jul 11 00:21:09.572766 systemd[1]: Started systemd-udevd.service. Jul 11 00:21:09.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.575439 systemd[1]: Starting systemd-networkd.service... Jul 11 00:21:09.586132 systemd[1]: Starting systemd-userdbd.service... Jul 11 00:21:09.611823 systemd[1]: Found device dev-ttyAMA0.device. Jul 11 00:21:09.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.619080 systemd[1]: Started systemd-userdbd.service. 
Jul 11 00:21:09.673424 systemd-networkd[1099]: lo: Link UP Jul 11 00:21:09.673435 systemd-networkd[1099]: lo: Gained carrier Jul 11 00:21:09.673780 systemd-networkd[1099]: Enumeration completed Jul 11 00:21:09.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.673923 systemd[1]: Started systemd-networkd.service. Jul 11 00:21:09.674007 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:21:09.675586 systemd-networkd[1099]: eth0: Link UP Jul 11 00:21:09.675589 systemd-networkd[1099]: eth0: Gained carrier Jul 11 00:21:09.677211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:21:09.692044 systemd-networkd[1099]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:21:09.693405 systemd[1]: Finished systemd-udev-settle.service. Jul 11 00:21:09.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.695783 systemd[1]: Starting lvm2-activation-early.service... Jul 11 00:21:09.709843 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:21:09.736607 systemd[1]: Finished lvm2-activation-early.service. Jul 11 00:21:09.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.737696 systemd[1]: Reached target cryptsetup.target. Jul 11 00:21:09.739814 systemd[1]: Starting lvm2-activation.service... Jul 11 00:21:09.743423 lvm[1126]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jul 11 00:21:09.767853 systemd[1]: Finished lvm2-activation.service. Jul 11 00:21:09.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.768851 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:21:09.769768 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:21:09.769801 systemd[1]: Reached target local-fs.target. Jul 11 00:21:09.770622 systemd[1]: Reached target machines.target. Jul 11 00:21:09.772679 systemd[1]: Starting ldconfig.service... Jul 11 00:21:09.773764 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:21:09.773841 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:09.775296 systemd[1]: Starting systemd-boot-update.service... Jul 11 00:21:09.777581 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 11 00:21:09.780085 systemd[1]: Starting systemd-machine-id-commit.service... Jul 11 00:21:09.782539 systemd[1]: Starting systemd-sysext.service... Jul 11 00:21:09.785433 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl) Jul 11 00:21:09.786826 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 11 00:21:09.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.799987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 11 00:21:09.806681 systemd[1]: Unmounting usr-share-oem.mount... Jul 11 00:21:09.811010 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 11 00:21:09.811262 systemd[1]: Unmounted usr-share-oem.mount. Jul 11 00:21:09.857908 kernel: loop0: detected capacity change from 0 to 203944 Jul 11 00:21:09.860055 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:21:09.861556 systemd[1]: Finished systemd-machine-id-commit.service. Jul 11 00:21:09.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.873720 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:21:09.875803 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31) Jul 11 00:21:09.875803 systemd-fsck[1139]: /dev/vda1: 236 files, 117310/258078 clusters Jul 11 00:21:09.879109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 11 00:21:09.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.885472 systemd[1]: Mounting boot.mount... Jul 11 00:21:09.889906 kernel: loop1: detected capacity change from 0 to 203944 Jul 11 00:21:09.893186 systemd[1]: Mounted boot.mount. Jul 11 00:21:09.897120 (sd-sysext)[1148]: Using extensions 'kubernetes'. Jul 11 00:21:09.897475 (sd-sysext)[1148]: Merged extensions into '/usr'. Jul 11 00:21:09.900715 systemd[1]: Finished systemd-boot-update.service. Jul 11 00:21:09.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:09.915484 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:21:09.917089 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:21:09.919281 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:21:09.921641 systemd[1]: Starting modprobe@loop.service... Jul 11 00:21:09.922667 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:21:09.922817 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:09.923803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:21:09.923979 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:21:09.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.925255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:09.925390 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:21:09.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:09.926730 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:09.926940 systemd[1]: Finished modprobe@loop.service. Jul 11 00:21:09.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:09.928322 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:21:09.928461 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:21:09.970010 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:21:09.973283 systemd[1]: Finished ldconfig.service. Jul 11 00:21:09.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.099678 systemd[1]: Mounting usr-share-oem.mount... Jul 11 00:21:10.105150 systemd[1]: Mounted usr-share-oem.mount. Jul 11 00:21:10.107243 systemd[1]: Finished systemd-sysext.service. Jul 11 00:21:10.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.109456 systemd[1]: Starting ensure-sysext.service... Jul 11 00:21:10.111462 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 11 00:21:10.118966 systemd[1]: Reloading. 
Jul 11 00:21:10.121436 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 11 00:21:10.122195 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:21:10.123623 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:21:10.156970 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-07-11T00:21:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:21:10.157001 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-07-11T00:21:10Z" level=info msg="torcx already run" Jul 11 00:21:10.226654 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:21:10.226676 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:21:10.242318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:21:10.289575 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 11 00:21:10.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.293979 systemd[1]: Starting audit-rules.service... Jul 11 00:21:10.296120 systemd[1]: Starting clean-ca-certificates.service... 
Jul 11 00:21:10.298291 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 11 00:21:10.300974 systemd[1]: Starting systemd-resolved.service... Jul 11 00:21:10.303458 systemd[1]: Starting systemd-timesyncd.service... Jul 11 00:21:10.306650 systemd[1]: Starting systemd-update-utmp.service... Jul 11 00:21:10.308234 systemd[1]: Finished clean-ca-certificates.service. Jul 11 00:21:10.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.311290 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:21:10.313391 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.314575 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:21:10.314000 audit[1243]: SYSTEM_BOOT pid=1243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.316602 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:21:10.318555 systemd[1]: Starting modprobe@loop.service... Jul 11 00:21:10.319501 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.319625 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:10.319752 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:21:10.320585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 11 00:21:10.320749 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:21:10.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.322614 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:10.322757 systemd[1]: Finished modprobe@loop.service. Jul 11 00:21:10.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.324153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:10.324351 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:21:10.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.327311 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 11 00:21:10.327483 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.329509 systemd[1]: Finished systemd-update-utmp.service. Jul 11 00:21:10.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.331110 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 11 00:21:10.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.333585 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.335522 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:21:10.337590 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:21:10.340145 systemd[1]: Starting modprobe@loop.service... Jul 11 00:21:10.340995 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.341163 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:10.342650 systemd[1]: Starting systemd-update-done.service... Jul 11 00:21:10.343665 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:21:10.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:10.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.344710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:21:10.344879 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:21:10.346427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:10.346560 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:21:10.347967 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:10.348166 systemd[1]: Finished modprobe@loop.service. Jul 11 00:21:10.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.349433 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:21:10.349522 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 11 00:21:10.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.352900 systemd[1]: Finished systemd-update-done.service. Jul 11 00:21:10.354372 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.355766 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:21:10.357702 systemd[1]: Starting modprobe@drm.service... Jul 11 00:21:10.360172 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:21:10.362178 systemd[1]: Starting modprobe@loop.service... Jul 11 00:21:10.363046 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.363170 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:10.364621 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 11 00:21:10.369005 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:21:10.370192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:21:10.370362 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:21:10.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:10.374286 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:21:10.374430 systemd[1]: Finished modprobe@drm.service. Jul 11 00:21:10.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:10.375790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:10.375947 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:21:10.376000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 11 00:21:10.376000 audit[1273]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc77711b0 a2=420 a3=0 items=0 ppid=1231 pid=1273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:10.376000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 11 00:21:10.377082 augenrules[1273]: No rules Jul 11 00:21:10.377280 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:10.377486 systemd[1]: Finished modprobe@loop.service. Jul 11 00:21:10.378760 systemd[1]: Finished audit-rules.service. Jul 11 00:21:10.380298 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:21:10.380404 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 11 00:21:10.381659 systemd[1]: Finished ensure-sysext.service. Jul 11 00:21:10.390469 systemd-resolved[1236]: Positive Trust Anchors: Jul 11 00:21:10.390479 systemd-resolved[1236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:21:10.390505 systemd-resolved[1236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 11 00:21:10.404486 systemd[1]: Started systemd-timesyncd.service. Jul 11 00:21:10.405422 systemd-timesyncd[1237]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:21:10.405793 systemd-timesyncd[1237]: Initial clock synchronization to Fri 2025-07-11 00:21:10.388647 UTC. Jul 11 00:21:10.405854 systemd[1]: Reached target time-set.target. Jul 11 00:21:10.405997 systemd-resolved[1236]: Defaulting to hostname 'linux'. Jul 11 00:21:10.410935 systemd[1]: Started systemd-resolved.service. Jul 11 00:21:10.411786 systemd[1]: Reached target network.target. Jul 11 00:21:10.412650 systemd[1]: Reached target nss-lookup.target. Jul 11 00:21:10.413486 systemd[1]: Reached target sysinit.target. Jul 11 00:21:10.414350 systemd[1]: Started motdgen.path. Jul 11 00:21:10.415102 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 11 00:21:10.416371 systemd[1]: Started logrotate.timer. Jul 11 00:21:10.417216 systemd[1]: Started mdadm.timer. Jul 11 00:21:10.417915 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 11 00:21:10.418743 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 11 00:21:10.418774 systemd[1]: Reached target paths.target. Jul 11 00:21:10.419541 systemd[1]: Reached target timers.target. Jul 11 00:21:10.420629 systemd[1]: Listening on dbus.socket. Jul 11 00:21:10.422485 systemd[1]: Starting docker.socket... Jul 11 00:21:10.424197 systemd[1]: Listening on sshd.socket. Jul 11 00:21:10.425107 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:10.425462 systemd[1]: Listening on docker.socket. Jul 11 00:21:10.426314 systemd[1]: Reached target sockets.target. Jul 11 00:21:10.427154 systemd[1]: Reached target basic.target. Jul 11 00:21:10.428072 systemd[1]: System is tainted: cgroupsv1 Jul 11 00:21:10.428121 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.428141 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:21:10.429196 systemd[1]: Starting containerd.service... Jul 11 00:21:10.431007 systemd[1]: Starting dbus.service... Jul 11 00:21:10.432724 systemd[1]: Starting enable-oem-cloudinit.service... Jul 11 00:21:10.434795 systemd[1]: Starting extend-filesystems.service... Jul 11 00:21:10.435706 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 11 00:21:10.437075 systemd[1]: Starting motdgen.service... Jul 11 00:21:10.438943 systemd[1]: Starting prepare-helm.service... Jul 11 00:21:10.440439 jq[1293]: false Jul 11 00:21:10.440809 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 11 00:21:10.442991 systemd[1]: Starting sshd-keygen.service... Jul 11 00:21:10.446165 systemd[1]: Starting systemd-logind.service... 
Jul 11 00:21:10.447365 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:21:10.447444 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:21:10.448589 systemd[1]: Starting update-engine.service... Jul 11 00:21:10.453074 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 11 00:21:10.455488 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:21:10.455778 jq[1309]: true Jul 11 00:21:10.455714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 11 00:21:10.456754 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:21:10.457029 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 11 00:21:10.471959 extend-filesystems[1294]: Found loop1 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda1 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda2 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda3 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found usr Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda4 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda6 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda7 Jul 11 00:21:10.471959 extend-filesystems[1294]: Found vda9 Jul 11 00:21:10.471959 extend-filesystems[1294]: Checking size of /dev/vda9 Jul 11 00:21:10.486058 jq[1315]: true Jul 11 00:21:10.486144 tar[1312]: linux-arm64/helm Jul 11 00:21:10.493219 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:21:10.493474 systemd[1]: Finished motdgen.service. Jul 11 00:21:10.513471 dbus-daemon[1292]: [system] SELinux support is enabled Jul 11 00:21:10.513742 systemd[1]: Started dbus.service. 
Jul 11 00:21:10.516629 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:21:10.516655 systemd[1]: Reached target system-config.target. Jul 11 00:21:10.517751 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:21:10.517782 systemd[1]: Reached target user-config.target. Jul 11 00:21:10.518517 extend-filesystems[1294]: Resized partition /dev/vda9 Jul 11 00:21:10.528713 update_engine[1307]: I0711 00:21:10.528163 1307 main.cc:92] Flatcar Update Engine starting Jul 11 00:21:10.530716 extend-filesystems[1349]: resize2fs 1.46.5 (30-Dec-2021) Jul 11 00:21:10.532948 update_engine[1307]: I0711 00:21:10.532919 1307 update_check_scheduler.cc:74] Next update check in 11m11s Jul 11 00:21:10.534536 systemd[1]: Started update-engine.service. Jul 11 00:21:10.538637 systemd[1]: Started locksmithd.service. Jul 11 00:21:10.540973 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:21:10.554150 systemd-logind[1302]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:21:10.554386 systemd-logind[1302]: New seat seat0. Jul 11 00:21:10.555791 systemd[1]: Started systemd-logind.service. Jul 11 00:21:10.560426 bash[1346]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:21:10.561214 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 11 00:21:10.563924 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:21:10.573280 extend-filesystems[1349]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:21:10.573280 extend-filesystems[1349]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:21:10.573280 extend-filesystems[1349]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 11 00:21:10.577942 extend-filesystems[1294]: Resized filesystem in /dev/vda9 Jul 11 00:21:10.575260 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:21:10.575494 systemd[1]: Finished extend-filesystems.service. Jul 11 00:21:10.592142 env[1316]: time="2025-07-11T00:21:10.592092040Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 11 00:21:10.618330 env[1316]: time="2025-07-11T00:21:10.618234280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:21:10.618672 env[1316]: time="2025-07-11T00:21:10.618646880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:10.620112 env[1316]: time="2025-07-11T00:21:10.620078320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:10.620206 env[1316]: time="2025-07-11T00:21:10.620189440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:10.620542 env[1316]: time="2025-07-11T00:21:10.620514320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:10.620622 env[1316]: time="2025-07-11T00:21:10.620606400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:21:10.620693 env[1316]: time="2025-07-11T00:21:10.620678320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 11 00:21:10.620751 env[1316]: time="2025-07-11T00:21:10.620738440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:10.620906 env[1316]: time="2025-07-11T00:21:10.620867360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:10.621210 env[1316]: time="2025-07-11T00:21:10.621183720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:10.621486 env[1316]: time="2025-07-11T00:21:10.621458840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:10.621561 env[1316]: time="2025-07-11T00:21:10.621545080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:21:10.621687 env[1316]: time="2025-07-11T00:21:10.621667880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 11 00:21:10.621754 env[1316]: time="2025-07-11T00:21:10.621739280Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:21:10.625170 env[1316]: time="2025-07-11T00:21:10.625141680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:21:10.625301 env[1316]: time="2025-07-11T00:21:10.625282160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 11 00:21:10.625366 env[1316]: time="2025-07-11T00:21:10.625352520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:21:10.625702 env[1316]: time="2025-07-11T00:21:10.625681480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.625846 env[1316]: time="2025-07-11T00:21:10.625828080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626003 env[1316]: time="2025-07-11T00:21:10.625981520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626085 env[1316]: time="2025-07-11T00:21:10.626070520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626493 env[1316]: time="2025-07-11T00:21:10.626462600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626584 env[1316]: time="2025-07-11T00:21:10.626566280Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626707 env[1316]: time="2025-07-11T00:21:10.626688720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626773 env[1316]: time="2025-07-11T00:21:10.626758800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.626837 env[1316]: time="2025-07-11T00:21:10.626819440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:21:10.627033 env[1316]: time="2025-07-11T00:21:10.627010000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 11 00:21:10.627339 env[1316]: time="2025-07-11T00:21:10.627316640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:21:10.627655 locksmithd[1351]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:21:10.628108 env[1316]: time="2025-07-11T00:21:10.628082360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:21:10.628208 env[1316]: time="2025-07-11T00:21:10.628191080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.628302 env[1316]: time="2025-07-11T00:21:10.628286320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:21:10.628540 env[1316]: time="2025-07-11T00:21:10.628524680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.628678 env[1316]: time="2025-07-11T00:21:10.628659360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.628745 env[1316]: time="2025-07-11T00:21:10.628731080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.628803 env[1316]: time="2025-07-11T00:21:10.628789480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.645968 env[1316]: time="2025-07-11T00:21:10.645928080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.645968 env[1316]: time="2025-07-11T00:21:10.645972640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 11 00:21:10.646087 env[1316]: time="2025-07-11T00:21:10.645987200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.646087 env[1316]: time="2025-07-11T00:21:10.646001800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.646087 env[1316]: time="2025-07-11T00:21:10.646017920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:21:10.646176 env[1316]: time="2025-07-11T00:21:10.646158600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.646200 env[1316]: time="2025-07-11T00:21:10.646176560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.646200 env[1316]: time="2025-07-11T00:21:10.646190560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:21:10.646252 env[1316]: time="2025-07-11T00:21:10.646203080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:21:10.646275 env[1316]: time="2025-07-11T00:21:10.646254080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 11 00:21:10.646275 env[1316]: time="2025-07-11T00:21:10.646270280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:21:10.646320 env[1316]: time="2025-07-11T00:21:10.646287800Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 11 00:21:10.646341 env[1316]: time="2025-07-11T00:21:10.646323880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:21:10.646572 env[1316]: time="2025-07-11T00:21:10.646519440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.646582560Z" level=info msg="Connect containerd service" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.646617680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.647421880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.647976640Z" level=info msg="Start subscribing containerd event" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648028440Z" level=info msg="Start recovering state" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648081280Z" level=info msg="Start event monitor" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648099320Z" level=info msg="Start snapshots syncer" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648108760Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648115880Z" level=info msg="Start streaming server" Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648340120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648379360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:21:10.651196 env[1316]: time="2025-07-11T00:21:10.648459920Z" level=info msg="containerd successfully booted in 0.061054s" Jul 11 00:21:10.648563 systemd[1]: Started containerd.service. 
Jul 11 00:21:10.884609 tar[1312]: linux-arm64/LICENSE Jul 11 00:21:10.884727 tar[1312]: linux-arm64/README.md Jul 11 00:21:10.888769 systemd[1]: Finished prepare-helm.service. Jul 11 00:21:11.610052 systemd-networkd[1099]: eth0: Gained IPv6LL Jul 11 00:21:11.611771 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 11 00:21:11.613070 systemd[1]: Reached target network-online.target. Jul 11 00:21:11.615506 systemd[1]: Starting kubelet.service... Jul 11 00:21:12.182111 systemd[1]: Started kubelet.service. Jul 11 00:21:12.621268 kubelet[1378]: E0711 00:21:12.621210 1378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:21:12.623082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:21:12.623231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:21:13.579824 sshd_keygen[1314]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:21:13.597539 systemd[1]: Finished sshd-keygen.service. Jul 11 00:21:13.599917 systemd[1]: Starting issuegen.service... Jul 11 00:21:13.604466 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:21:13.604711 systemd[1]: Finished issuegen.service. Jul 11 00:21:13.606852 systemd[1]: Starting systemd-user-sessions.service... Jul 11 00:21:13.612281 systemd[1]: Finished systemd-user-sessions.service. Jul 11 00:21:13.614499 systemd[1]: Started getty@tty1.service. Jul 11 00:21:13.616427 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 11 00:21:13.617459 systemd[1]: Reached target getty.target. Jul 11 00:21:13.618286 systemd[1]: Reached target multi-user.target. Jul 11 00:21:13.620284 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Jul 11 00:21:13.626497 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 11 00:21:13.626727 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 11 00:21:13.627831 systemd[1]: Startup finished in 5.169s (kernel) + 6.479s (userspace) = 11.648s. Jul 11 00:21:15.551818 systemd[1]: Created slice system-sshd.slice. Jul 11 00:21:15.552992 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:36686.service. Jul 11 00:21:15.600800 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:15.602793 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:15.610569 systemd[1]: Created slice user-500.slice. Jul 11 00:21:15.611511 systemd[1]: Starting user-runtime-dir@500.service... Jul 11 00:21:15.613389 systemd-logind[1302]: New session 1 of user core. Jul 11 00:21:15.620556 systemd[1]: Finished user-runtime-dir@500.service. Jul 11 00:21:15.621745 systemd[1]: Starting user@500.service... Jul 11 00:21:15.624772 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:15.691633 systemd[1409]: Queued start job for default target default.target. Jul 11 00:21:15.691848 systemd[1409]: Reached target paths.target. Jul 11 00:21:15.691875 systemd[1409]: Reached target sockets.target. Jul 11 00:21:15.691910 systemd[1409]: Reached target timers.target. Jul 11 00:21:15.691922 systemd[1409]: Reached target basic.target. Jul 11 00:21:15.691967 systemd[1409]: Reached target default.target. Jul 11 00:21:15.691989 systemd[1409]: Startup finished in 61ms. Jul 11 00:21:15.692068 systemd[1]: Started user@500.service. Jul 11 00:21:15.693009 systemd[1]: Started session-1.scope. Jul 11 00:21:15.742342 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:36702.service. 
Jul 11 00:21:15.791979 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 36702 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:15.793558 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:15.798051 systemd[1]: Started session-2.scope. Jul 11 00:21:15.798436 systemd-logind[1302]: New session 2 of user core. Jul 11 00:21:15.852330 sshd[1418]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:15.854635 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:36708.service. Jul 11 00:21:15.855286 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:36702.service: Deactivated successfully. Jul 11 00:21:15.856220 systemd-logind[1302]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:21:15.856236 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:21:15.856940 systemd-logind[1302]: Removed session 2. Jul 11 00:21:15.897168 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 36708 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:15.898641 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:15.901863 systemd-logind[1302]: New session 3 of user core. Jul 11 00:21:15.902624 systemd[1]: Started session-3.scope. Jul 11 00:21:15.951346 sshd[1423]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:15.953536 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:36716.service. Jul 11 00:21:15.954562 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:36708.service: Deactivated successfully. Jul 11 00:21:15.955630 systemd-logind[1302]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:21:15.955903 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:21:15.956831 systemd-logind[1302]: Removed session 3. 
Jul 11 00:21:15.995458 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 36716 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:15.996816 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:16.000406 systemd-logind[1302]: New session 4 of user core. Jul 11 00:21:16.002408 systemd[1]: Started session-4.scope. Jul 11 00:21:16.056425 sshd[1430]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:16.059123 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:36716.service: Deactivated successfully. Jul 11 00:21:16.060133 systemd-logind[1302]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:21:16.061756 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:36728.service. Jul 11 00:21:16.062738 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:21:16.063650 systemd-logind[1302]: Removed session 4. Jul 11 00:21:16.103123 sshd[1439]: Accepted publickey for core from 10.0.0.1 port 36728 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:16.104504 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:16.107839 systemd-logind[1302]: New session 5 of user core. Jul 11 00:21:16.109745 systemd[1]: Started session-5.scope. Jul 11 00:21:16.169602 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:21:16.169821 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:21:16.187050 dbus-daemon[1292]: avc: received setenforce notice (enforcing=1) Jul 11 00:21:16.187932 sudo[1443]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:16.189842 sshd[1439]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:16.192071 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:36740.service. Jul 11 00:21:16.192624 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:36728.service: Deactivated successfully. 
Jul 11 00:21:16.193535 systemd-logind[1302]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:21:16.193578 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:21:16.194326 systemd-logind[1302]: Removed session 5. Jul 11 00:21:16.234350 sshd[1445]: Accepted publickey for core from 10.0.0.1 port 36740 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:16.235476 sshd[1445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:16.238465 systemd-logind[1302]: New session 6 of user core. Jul 11 00:21:16.239213 systemd[1]: Started session-6.scope. Jul 11 00:21:16.290490 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:21:16.291027 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:21:16.293948 sudo[1452]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:16.298181 sudo[1451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:21:16.298415 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:21:16.306753 systemd[1]: Stopping audit-rules.service... Jul 11 00:21:16.306000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 11 00:21:16.308532 kernel: kauditd_printk_skb: 91 callbacks suppressed Jul 11 00:21:16.308570 kernel: audit: type=1305 audit(1752193276.306:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 11 00:21:16.308695 auditctl[1455]: No rules Jul 11 00:21:16.308999 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:21:16.309204 systemd[1]: Stopped audit-rules.service. 
Jul 11 00:21:16.306000 audit[1455]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9ed5460 a2=420 a3=0 items=0 ppid=1 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.310591 systemd[1]: Starting audit-rules.service... Jul 11 00:21:16.313663 kernel: audit: type=1300 audit(1752193276.306:152): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9ed5460 a2=420 a3=0 items=0 ppid=1 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.313716 kernel: audit: type=1327 audit(1752193276.306:152): proctitle=2F7362696E2F617564697463746C002D44 Jul 11 00:21:16.306000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 11 00:21:16.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.317249 kernel: audit: type=1131 audit(1752193276.308:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.327705 augenrules[1473]: No rules Jul 11 00:21:16.328461 systemd[1]: Finished audit-rules.service. Jul 11 00:21:16.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:16.328000 audit[1451]: USER_END pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.329166 sudo[1451]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:16.332698 sshd[1445]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:16.334225 kernel: audit: type=1130 audit(1752193276.327:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.334287 kernel: audit: type=1106 audit(1752193276.328:155): pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.334306 kernel: audit: type=1104 audit(1752193276.328:156): pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.328000 audit[1451]: CRED_DISP pid=1451 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.334919 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:36750.service. Jul 11 00:21:16.336068 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:36740.service: Deactivated successfully. 
Jul 11 00:21:16.333000 audit[1445]: USER_END pid=1445 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.338092 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:21:16.338392 systemd-logind[1302]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:21:16.339182 systemd-logind[1302]: Removed session 6. Jul 11 00:21:16.340409 kernel: audit: type=1106 audit(1752193276.333:157): pid=1445 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.340459 kernel: audit: type=1104 audit(1752193276.333:158): pid=1445 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.333000 audit[1445]: CRED_DISP pid=1445 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.33:22-10.0.0.1:36750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.345846 kernel: audit: type=1130 audit(1752193276.334:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.33:22-10.0.0.1:36750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 11 00:21:16.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.33:22-10.0.0.1:36740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.376000 audit[1478]: USER_ACCT pid=1478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.377794 sshd[1478]: Accepted publickey for core from 10.0.0.1 port 36750 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:21:16.379812 sshd[1478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:16.378000 audit[1478]: CRED_ACQ pid=1478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.378000 audit[1478]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7f20160 a2=3 a3=1 items=0 ppid=1 pid=1478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.378000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:21:16.383864 systemd[1]: Started session-7.scope. Jul 11 00:21:16.384091 systemd-logind[1302]: New session 7 of user core. 
Jul 11 00:21:16.386000 audit[1478]: USER_START pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.387000 audit[1483]: CRED_ACQ pid=1483 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:16.434000 audit[1484]: USER_ACCT pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.435178 sudo[1484]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:21:16.434000 audit[1484]: CRED_REFR pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.435766 sudo[1484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:21:16.436000 audit[1484]: USER_START pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:16.499999 systemd[1]: Starting docker.service... 
Jul 11 00:21:16.583945 env[1496]: time="2025-07-11T00:21:16.583876363Z" level=info msg="Starting up" Jul 11 00:21:16.585657 env[1496]: time="2025-07-11T00:21:16.585631277Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 11 00:21:16.585657 env[1496]: time="2025-07-11T00:21:16.585654140Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 11 00:21:16.585762 env[1496]: time="2025-07-11T00:21:16.585673086Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 11 00:21:16.585762 env[1496]: time="2025-07-11T00:21:16.585682998Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 11 00:21:16.587788 env[1496]: time="2025-07-11T00:21:16.587762787Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 11 00:21:16.587788 env[1496]: time="2025-07-11T00:21:16.587783572Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 11 00:21:16.587862 env[1496]: time="2025-07-11T00:21:16.587796642Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 11 00:21:16.587862 env[1496]: time="2025-07-11T00:21:16.587805115Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 11 00:21:16.775281 env[1496]: time="2025-07-11T00:21:16.775241847Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 11 00:21:16.775281 env[1496]: time="2025-07-11T00:21:16.775268786Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 11 00:21:16.775465 env[1496]: time="2025-07-11T00:21:16.775401766Z" level=info msg="Loading containers: start." 
Jul 11 00:21:16.823000 audit[1530]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.823000 audit[1530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdc3cacc0 a2=0 a3=1 items=0 ppid=1496 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.823000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 11 00:21:16.825000 audit[1532]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.825000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff72caaf0 a2=0 a3=1 items=0 ppid=1496 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.825000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 11 00:21:16.826000 audit[1534]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.826000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffce49db20 a2=0 a3=1 items=0 ppid=1496 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.826000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 11 00:21:16.828000 
audit[1536]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.828000 audit[1536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe29558c0 a2=0 a3=1 items=0 ppid=1496 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.828000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 11 00:21:16.836000 audit[1538]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.836000 audit[1538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff09a7460 a2=0 a3=1 items=0 ppid=1496 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.836000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 11 00:21:16.858000 audit[1543]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.858000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc205d600 a2=0 a3=1 items=0 ppid=1496 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.858000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 11 00:21:16.865000 audit[1545]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.865000 audit[1545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9ff71f0 a2=0 a3=1 items=0 ppid=1496 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.865000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 11 00:21:16.866000 audit[1547]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.866000 audit[1547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc82734b0 a2=0 a3=1 items=0 ppid=1496 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.866000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 11 00:21:16.868000 audit[1549]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.868000 audit[1549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffedfdaa40 a2=0 a3=1 items=0 ppid=1496 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.868000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 11 00:21:16.874000 audit[1553]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.874000 audit[1553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffef4ea350 a2=0 a3=1 items=0 ppid=1496 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.874000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 11 00:21:16.890000 audit[1554]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.890000 audit[1554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffef3575d0 a2=0 a3=1 items=0 ppid=1496 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.890000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 11 00:21:16.900911 kernel: Initializing XFRM netlink socket Jul 11 00:21:16.922793 env[1496]: time="2025-07-11T00:21:16.922744784Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 11 00:21:16.937000 audit[1562]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.937000 audit[1562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe5630a40 a2=0 a3=1 items=0 ppid=1496 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.937000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 11 00:21:16.952000 audit[1565]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.952000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff3914a70 a2=0 a3=1 items=0 ppid=1496 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.952000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 11 00:21:16.955000 audit[1568]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.955000 audit[1568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd21ae620 a2=0 a3=1 items=0 ppid=1496 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 
11 00:21:16.955000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 11 00:21:16.957000 audit[1570]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.957000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd4f0a280 a2=0 a3=1 items=0 ppid=1496 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.957000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 11 00:21:16.959000 audit[1572]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.959000 audit[1572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffff4c32b0 a2=0 a3=1 items=0 ppid=1496 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.959000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 11 00:21:16.961000 audit[1574]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.961000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe3f3abe0 a2=0 a3=1 items=0 ppid=1496 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.961000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 11 00:21:16.962000 audit[1576]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.962000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffffc120c20 a2=0 a3=1 items=0 ppid=1496 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.962000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 11 00:21:16.969000 audit[1579]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.969000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd94579d0 a2=0 a3=1 items=0 ppid=1496 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.969000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 11 00:21:16.971000 audit[1581]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.971000 
audit[1581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffdfcb0f00 a2=0 a3=1 items=0 ppid=1496 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.971000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 11 00:21:16.973000 audit[1583]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.973000 audit[1583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe9763880 a2=0 a3=1 items=0 ppid=1496 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.973000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 11 00:21:16.974000 audit[1585]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.974000 audit[1585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffff86c2a0 a2=0 a3=1 items=0 ppid=1496 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.974000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 11 00:21:16.976128 systemd-networkd[1099]: docker0: Link UP Jul 11 00:21:16.982000 audit[1589]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.982000 audit[1589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff54fdad0 a2=0 a3=1 items=0 ppid=1496 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.982000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 11 00:21:16.995000 audit[1590]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:16.995000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc1eb8f10 a2=0 a3=1 items=0 ppid=1496 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:16.995000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 11 00:21:16.997219 env[1496]: time="2025-07-11T00:21:16.997167765Z" level=info msg="Loading containers: done." Jul 11 00:21:17.013803 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1461337262-merged.mount: Deactivated successfully. 
Jul 11 00:21:17.016667 env[1496]: time="2025-07-11T00:21:17.016624131Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:21:17.016826 env[1496]: time="2025-07-11T00:21:17.016801486Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 11 00:21:17.016955 env[1496]: time="2025-07-11T00:21:17.016933392Z" level=info msg="Daemon has completed initialization" Jul 11 00:21:17.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:17.032108 systemd[1]: Started docker.service. Jul 11 00:21:17.039135 env[1496]: time="2025-07-11T00:21:17.039072794Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:21:17.634586 env[1316]: time="2025-07-11T00:21:17.634512495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:21:18.310530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241630121.mount: Deactivated successfully. 
Jul 11 00:21:19.610141 env[1316]: time="2025-07-11T00:21:19.610093742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:19.611556 env[1316]: time="2025-07-11T00:21:19.611520534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:19.613620 env[1316]: time="2025-07-11T00:21:19.613580852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:19.615305 env[1316]: time="2025-07-11T00:21:19.615277956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:19.616087 env[1316]: time="2025-07-11T00:21:19.616044638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 11 00:21:19.620836 env[1316]: time="2025-07-11T00:21:19.620807754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:21:20.971607 env[1316]: time="2025-07-11T00:21:20.971517173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:20.972995 env[1316]: time="2025-07-11T00:21:20.972932227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 11 00:21:20.974640 env[1316]: time="2025-07-11T00:21:20.974599814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:20.976939 env[1316]: time="2025-07-11T00:21:20.976908987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:20.977746 env[1316]: time="2025-07-11T00:21:20.977713598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 11 00:21:20.978272 env[1316]: time="2025-07-11T00:21:20.978225859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:21:22.228642 env[1316]: time="2025-07-11T00:21:22.228577196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:22.230348 env[1316]: time="2025-07-11T00:21:22.230308828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:22.232367 env[1316]: time="2025-07-11T00:21:22.232337988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:22.234205 env[1316]: time="2025-07-11T00:21:22.234181563Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:22.235856 env[1316]: time="2025-07-11T00:21:22.235819523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 11 00:21:22.236445 env[1316]: time="2025-07-11T00:21:22.236418056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:21:22.721004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:21:22.721173 systemd[1]: Stopped kubelet.service. Jul 11 00:21:22.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:22.722035 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 11 00:21:22.722095 kernel: audit: type=1130 audit(1752193282.720:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:22.722722 systemd[1]: Starting kubelet.service... Jul 11 00:21:22.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:22.727892 kernel: audit: type=1131 audit(1752193282.720:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:22.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:22.826092 systemd[1]: Started kubelet.service. Jul 11 00:21:22.831925 kernel: audit: type=1130 audit(1752193282.825:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:22.871008 kubelet[1635]: E0711 00:21:22.870960 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:21:22.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 11 00:21:22.873540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:21:22.873672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:21:22.876911 kernel: audit: type=1131 audit(1752193282.873:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 11 00:21:23.294661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198504039.mount: Deactivated successfully. 
Jul 11 00:21:23.878175 env[1316]: time="2025-07-11T00:21:23.878126695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:23.880183 env[1316]: time="2025-07-11T00:21:23.880142206Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:23.881596 env[1316]: time="2025-07-11T00:21:23.881561244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:23.882807 env[1316]: time="2025-07-11T00:21:23.882772981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:23.883226 env[1316]: time="2025-07-11T00:21:23.883195098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 11 00:21:23.883643 env[1316]: time="2025-07-11T00:21:23.883602742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:21:24.446466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239160369.mount: Deactivated successfully. 
Jul 11 00:21:25.379489 env[1316]: time="2025-07-11T00:21:25.379441657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.382578 env[1316]: time="2025-07-11T00:21:25.382542108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.385116 env[1316]: time="2025-07-11T00:21:25.385079996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.387502 env[1316]: time="2025-07-11T00:21:25.387478262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.388367 env[1316]: time="2025-07-11T00:21:25.388329063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 00:21:25.389353 env[1316]: time="2025-07-11T00:21:25.389315326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:21:25.862848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166212305.mount: Deactivated successfully. 
Jul 11 00:21:25.865858 env[1316]: time="2025-07-11T00:21:25.865820789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.867158 env[1316]: time="2025-07-11T00:21:25.867132715Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.868878 env[1316]: time="2025-07-11T00:21:25.868853388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.870315 env[1316]: time="2025-07-11T00:21:25.870287982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:25.870841 env[1316]: time="2025-07-11T00:21:25.870818238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:21:25.871355 env[1316]: time="2025-07-11T00:21:25.871324464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:21:26.308588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987495822.mount: Deactivated successfully. 
Jul 11 00:21:28.653783 env[1316]: time="2025-07-11T00:21:28.653724283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:21:28.655185 env[1316]: time="2025-07-11T00:21:28.655154585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:21:28.657081 env[1316]: time="2025-07-11T00:21:28.657052244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:21:28.659676 env[1316]: time="2025-07-11T00:21:28.659645222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:21:28.660545 env[1316]: time="2025-07-11T00:21:28.660515039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 11 00:21:32.971046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 11 00:21:32.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:32.971219 systemd[1]: Stopped kubelet.service.
Jul 11 00:21:32.972672 systemd[1]: Starting kubelet.service...
Jul 11 00:21:32.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:32.976130 kernel: audit: type=1130 audit(1752193292.970:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:32.976206 kernel: audit: type=1131 audit(1752193292.970:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.068256 systemd[1]: Started kubelet.service.
Jul 11 00:21:33.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.071908 kernel: audit: type=1130 audit(1752193293.067:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.102272 kubelet[1672]: E0711 00:21:33.102213 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:21:33.104222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:21:33.104366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:21:33.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 11 00:21:33.107918 kernel: audit: type=1131 audit(1752193293.103:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 11 00:21:33.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.643940 systemd[1]: Stopped kubelet.service.
Jul 11 00:21:33.646040 systemd[1]: Starting kubelet.service...
Jul 11 00:21:33.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.648964 kernel: audit: type=1130 audit(1752193293.643:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.649034 kernel: audit: type=1131 audit(1752193293.643:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.669445 systemd[1]: Reloading.
Jul 11 00:21:33.722658 /usr/lib/systemd/system-generators/torcx-generator[1708]: time="2025-07-11T00:21:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 11 00:21:33.722689 /usr/lib/systemd/system-generators/torcx-generator[1708]: time="2025-07-11T00:21:33Z" level=info msg="torcx already run"
Jul 11 00:21:33.790081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 11 00:21:33.790101 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 11 00:21:33.805566 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:21:33.870163 systemd[1]: Started kubelet.service.
Jul 11 00:21:33.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.873624 systemd[1]: Stopping kubelet.service...
Jul 11 00:21:33.873906 kernel: audit: type=1130 audit(1752193293.869:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.875243 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:21:33.875486 systemd[1]: Stopped kubelet.service.
Jul 11 00:21:33.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.876945 systemd[1]: Starting kubelet.service...
Jul 11 00:21:33.878900 kernel: audit: type=1131 audit(1752193293.874:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.969680 systemd[1]: Started kubelet.service.
Jul 11 00:21:33.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:33.973907 kernel: audit: type=1130 audit(1752193293.969:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:21:34.009132 kubelet[1770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:21:34.009488 kubelet[1770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:21:34.009546 kubelet[1770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:21:34.009670 kubelet[1770]: I0711 00:21:34.009635 1770 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:21:34.627958 kubelet[1770]: I0711 00:21:34.627914 1770 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 11 00:21:34.627958 kubelet[1770]: I0711 00:21:34.627949 1770 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:21:34.628505 kubelet[1770]: I0711 00:21:34.628477 1770 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 11 00:21:34.661437 kubelet[1770]: E0711 00:21:34.661402 1770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:21:34.661700 kubelet[1770]: I0711 00:21:34.661656 1770 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:21:34.666862 kubelet[1770]: E0711 00:21:34.666828 1770 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:21:34.666862 kubelet[1770]: I0711 00:21:34.666860 1770 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:21:34.670444 kubelet[1770]: I0711 00:21:34.670425 1770 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:21:34.670940 kubelet[1770]: I0711 00:21:34.670925 1770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 11 00:21:34.671062 kubelet[1770]: I0711 00:21:34.671031 1770 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:21:34.671219 kubelet[1770]: I0711 00:21:34.671057 1770 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 11 00:21:34.671300 kubelet[1770]: I0711 00:21:34.671288 1770 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:21:34.671300 kubelet[1770]: I0711 00:21:34.671297 1770 container_manager_linux.go:300] "Creating device plugin manager"
Jul 11 00:21:34.671473 kubelet[1770]: I0711 00:21:34.671459 1770 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:21:34.677418 kubelet[1770]: I0711 00:21:34.677387 1770 kubelet.go:408] "Attempting to sync node with API server"
Jul 11 00:21:34.677477 kubelet[1770]: I0711 00:21:34.677425 1770 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:21:34.677477 kubelet[1770]: I0711 00:21:34.677452 1770 kubelet.go:314] "Adding apiserver pod source"
Jul 11 00:21:34.677548 kubelet[1770]: I0711 00:21:34.677537 1770 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:21:34.678192 kubelet[1770]: W0711 00:21:34.678055 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 11 00:21:34.678251 kubelet[1770]: E0711 00:21:34.678199 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:21:34.678251 kubelet[1770]: W0711 00:21:34.678067 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 11 00:21:34.678251 kubelet[1770]: E0711 00:21:34.678227 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:21:34.681298 kubelet[1770]: I0711 00:21:34.681273 1770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 11 00:21:34.682159 kubelet[1770]: I0711 00:21:34.682122 1770 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:21:34.682381 kubelet[1770]: W0711 00:21:34.682368 1770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:21:34.683409 kubelet[1770]: I0711 00:21:34.683393 1770 server.go:1274] "Started kubelet"
Jul 11 00:21:34.704498 kubelet[1770]: I0711 00:21:34.704467 1770 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Jul 11 00:21:34.704553 kubelet[1770]: I0711 00:21:34.704505 1770 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Jul 11 00:21:34.704580 kubelet[1770]: I0711 00:21:34.704558 1770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:21:34.702000 audit[1770]: AVC avc: denied { mac_admin } for pid=1770 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 11 00:21:34.702000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Jul 11 00:21:34.702000 audit[1770]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40006b9470 a1=4000c7a768 a2=40006b9440 a3=25 items=0 ppid=1 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Jul 11 00:21:34.702000 audit[1770]: AVC avc: denied { mac_admin } for pid=1770 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 11 00:21:34.702000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Jul 11 00:21:34.702000 audit[1770]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400074c5e0 a1=4000c7a780 a2=40006b9500 a3=25 items=0 ppid=1 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Jul 11 00:21:34.705000 audit[1783]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.705000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdf6b6b80 a2=0 a3=1 items=0 ppid=1770 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 11 00:21:34.706000 audit[1784]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.708919 kernel: audit: type=1400 audit(1752193294.702:207): avc: denied { mac_admin } for pid=1770 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 11 00:21:34.706000 audit[1784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8e8ebe0 a2=0 a3=1 items=0 ppid=1770 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul 11 00:21:34.709385 kubelet[1770]: I0711 00:21:34.709326 1770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:21:34.710091 kubelet[1770]: I0711 00:21:34.710037 1770 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 11 00:21:34.710330 kubelet[1770]: E0711 00:21:34.710306 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:21:34.710463 kubelet[1770]: I0711 00:21:34.710441 1770 server.go:449] "Adding debug handlers to kubelet server"
Jul 11 00:21:34.711397 kubelet[1770]: I0711 00:21:34.711346 1770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:21:34.711595 kubelet[1770]: I0711 00:21:34.711578 1770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:21:34.711753 kubelet[1770]: I0711 00:21:34.711736 1770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:21:34.712000 audit[1786]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.712000 audit[1786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdf7ece60 a2=0 a3=1 items=0 ppid=1770 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 11 00:21:34.715000 audit[1788]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.715000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd9cc3e60 a2=0 a3=1 items=0 ppid=1770 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 11 00:21:34.718447 kubelet[1770]: I0711 00:21:34.718426 1770 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 11 00:21:34.718501 kubelet[1770]: I0711 00:21:34.718495 1770 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:21:34.718589 kubelet[1770]: E0711 00:21:34.710006 1770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a849b6c7acf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:21:34.683372239 +0000 UTC m=+0.710698471,LastTimestamp:2025-07-11 00:21:34.683372239 +0000 UTC m=+0.710698471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:21:34.718589 kubelet[1770]: E0711 00:21:34.718562 1770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms"
Jul 11 00:21:34.718690 kubelet[1770]: W0711 00:21:34.718656 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 11 00:21:34.718715 kubelet[1770]: E0711 00:21:34.718695 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:21:34.718757 kubelet[1770]: I0711 00:21:34.718737 1770 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:21:34.718918 kubelet[1770]: I0711 00:21:34.718840 1770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:21:34.720175 kubelet[1770]: E0711 00:21:34.720155 1770 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:21:34.720329 kubelet[1770]: I0711 00:21:34.720311 1770 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:21:34.725000 audit[1792]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.725000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc7ffca90 a2=0 a3=1 items=0 ppid=1770 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jul 11 00:21:34.727411 kubelet[1770]: I0711 00:21:34.727381 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:21:34.726000 audit[1793]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 11 00:21:34.726000 audit[1793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffca9f2a50 a2=0 a3=1 items=0 ppid=1770 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 11 00:21:34.728260 kubelet[1770]: I0711 00:21:34.728245 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:21:34.728287 kubelet[1770]: I0711 00:21:34.728262 1770 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 11 00:21:34.728287 kubelet[1770]: I0711 00:21:34.728278 1770 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 11 00:21:34.728331 kubelet[1770]: E0711 00:21:34.728315 1770 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:21:34.727000 audit[1794]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.727000 audit[1794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3d9b4d0 a2=0 a3=1 items=0 ppid=1770 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 11 00:21:34.728000 audit[1795]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.728000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcbc94d0 a2=0 a3=1 items=0 ppid=1770 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul 11 00:21:34.729000 audit[1796]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 11 00:21:34.729000 audit[1796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd41cf0c0 a2=0 a3=1 items=0 ppid=1770 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul 11 00:21:34.730000 audit[1797]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 11 00:21:34.730000 audit[1797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe4ad6d0 a2=0 a3=1 items=0 ppid=1770 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 11 00:21:34.731000 audit[1798]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 11 00:21:34.731000 audit[1798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffeed97f20 a2=0 a3=1 items=0 ppid=1770 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul 11 00:21:34.732000 audit[1799]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 11 00:21:34.732000 audit[1799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffe6b9a10 a2=0 a3=1 items=0 ppid=1770 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.732000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul 11 00:21:34.735754 kubelet[1770]: W0711 00:21:34.735710 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 11 00:21:34.735828 kubelet[1770]: E0711 00:21:34.735767 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:21:34.736827 kubelet[1770]: I0711 00:21:34.736814 1770 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 11 00:21:34.736827 kubelet[1770]: I0711 00:21:34.736828 1770 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 11 00:21:34.736919 kubelet[1770]: I0711 00:21:34.736872 1770 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:21:34.811138 kubelet[1770]: E0711 00:21:34.811090 1770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:21:34.828574 kubelet[1770]: E0711 00:21:34.828545 1770 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 11 00:21:34.868601 kubelet[1770]: I0711 00:21:34.868571 1770 policy_none.go:49] "None policy: Start"
Jul 11 00:21:34.869393 kubelet[1770]: I0711 00:21:34.869366 1770 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 11 00:21:34.869430 kubelet[1770]: I0711 00:21:34.869398 1770 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:21:34.874556 kubelet[1770]: I0711 00:21:34.874520 1770 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 00:21:34.873000 audit[1770]: AVC avc: denied { mac_admin } for pid=1770 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 11 00:21:34.873000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Jul 11 00:21:34.873000 audit[1770]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400102db30 a1=400101b1e8 a2=400102db00 a3=25 items=0 ppid=1 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:21:34.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Jul 11 00:21:34.874820 kubelet[1770]: I0711 00:21:34.874587 1770 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Jul 11 00:21:34.874820 kubelet[1770]: I0711 00:21:34.874702 1770 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:21:34.874820 kubelet[1770]: I0711 00:21:34.874712 1770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:21:34.875499 kubelet[1770]: I0711 00:21:34.875465 1770 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:21:34.876576 kubelet[1770]: E0711 00:21:34.876543 1770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 11 00:21:34.920061 kubelet[1770]: E0711 00:21:34.919415 1770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms"
Jul 11 00:21:34.976158 kubelet[1770]: I0711 00:21:34.976124 1770 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:21:34.976645 kubelet[1770]: E0711 00:21:34.976611 1770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Jul 11 00:21:35.001165 kubelet[1770]: E0711 00:21:35.001063 1770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a849b6c7acf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:21:34.683372239 +0000 UTC m=+0.710698471,LastTimestamp:2025-07-11 00:21:34.683372239 +0000 UTC m=+0.710698471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:21:35.121025 kubelet[1770]: I0711 00:21:35.120987 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:21:35.121416 kubelet[1770]: I0711 00:21:35.121395 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:21:35.121544 kubelet[1770]: I0711 00:21:35.121527 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:21:35.121631 kubelet[1770]: I0711 00:21:35.121617 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:21:35.121719 kubelet[1770]: I0711 00:21:35.121706 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c699b0927dce4d6d32bc978e9b69d15b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c699b0927dce4d6d32bc978e9b69d15b\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:21:35.121816 kubelet[1770]: I0711 00:21:35.121803 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:21:35.121921 kubelet[1770]: I0711 00:21:35.121907 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:21:35.122022 kubelet[1770]: I0711 00:21:35.122008 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\"
(UniqueName: \"kubernetes.io/host-path/c699b0927dce4d6d32bc978e9b69d15b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c699b0927dce4d6d32bc978e9b69d15b\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:21:35.122108 kubelet[1770]: I0711 00:21:35.122096 1770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c699b0927dce4d6d32bc978e9b69d15b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c699b0927dce4d6d32bc978e9b69d15b\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:21:35.178306 kubelet[1770]: I0711 00:21:35.178211 1770 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:21:35.179043 kubelet[1770]: E0711 00:21:35.179012 1770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jul 11 00:21:35.320492 kubelet[1770]: E0711 00:21:35.320443 1770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Jul 11 00:21:35.335022 kubelet[1770]: E0711 00:21:35.334998 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:35.335771 env[1316]: time="2025-07-11T00:21:35.335724845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c699b0927dce4d6d32bc978e9b69d15b,Namespace:kube-system,Attempt:0,}" Jul 11 00:21:35.336238 kubelet[1770]: E0711 00:21:35.336220 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 11 00:21:35.336383 kubelet[1770]: E0711 00:21:35.336363 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:35.336652 env[1316]: time="2025-07-11T00:21:35.336601571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:21:35.337024 env[1316]: time="2025-07-11T00:21:35.336904504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:21:35.527210 kubelet[1770]: W0711 00:21:35.527150 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jul 11 00:21:35.527410 kubelet[1770]: E0711 00:21:35.527387 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:21:35.559062 kubelet[1770]: W0711 00:21:35.559012 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jul 11 00:21:35.559218 kubelet[1770]: E0711 00:21:35.559198 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:21:35.580563 kubelet[1770]: I0711 00:21:35.580539 1770 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:21:35.581043 kubelet[1770]: E0711 00:21:35.581018 1770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jul 11 00:21:35.841044 kubelet[1770]: W0711 00:21:35.840876 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jul 11 00:21:35.841044 kubelet[1770]: E0711 00:21:35.840969 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:21:35.964565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959381969.mount: Deactivated successfully. 
Jul 11 00:21:35.968362 env[1316]: time="2025-07-11T00:21:35.968326876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.971839 env[1316]: time="2025-07-11T00:21:35.971805706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.973119 env[1316]: time="2025-07-11T00:21:35.973092061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.973729 env[1316]: time="2025-07-11T00:21:35.973709564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.975121 env[1316]: time="2025-07-11T00:21:35.975093418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.976557 env[1316]: time="2025-07-11T00:21:35.976528380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.978305 env[1316]: time="2025-07-11T00:21:35.978277592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.979258 env[1316]: time="2025-07-11T00:21:35.979227822Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.980576 env[1316]: time="2025-07-11T00:21:35.980546010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.981956 env[1316]: time="2025-07-11T00:21:35.981929863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.983562 env[1316]: time="2025-07-11T00:21:35.983536707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.990580 env[1316]: time="2025-07-11T00:21:35.989043008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:35.999737 kubelet[1770]: W0711 00:21:35.999671 1770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jul 11 00:21:35.999822 kubelet[1770]: E0711 00:21:35.999744 1770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:21:36.011209 env[1316]: 
time="2025-07-11T00:21:36.011136975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:36.011209 env[1316]: time="2025-07-11T00:21:36.011175567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:36.011209 env[1316]: time="2025-07-11T00:21:36.011185805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:36.011850 env[1316]: time="2025-07-11T00:21:36.011442432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f09cf3afed8485df1f00985502410874c2ec8bc5e88a0f35ec9b22b7f8292c7 pid=1822 runtime=io.containerd.runc.v2 Jul 11 00:21:36.012585 env[1316]: time="2025-07-11T00:21:36.012523927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:36.012585 env[1316]: time="2025-07-11T00:21:36.012568918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:36.012714 env[1316]: time="2025-07-11T00:21:36.012682854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:36.013019 env[1316]: time="2025-07-11T00:21:36.012979913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2c9b27afdc426396f51361ef1c5d13511ce02ce46236b5941dab9342b6dcca6 pid=1821 runtime=io.containerd.runc.v2 Jul 11 00:21:36.014520 env[1316]: time="2025-07-11T00:21:36.014455806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:36.014520 env[1316]: time="2025-07-11T00:21:36.014488879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:36.014520 env[1316]: time="2025-07-11T00:21:36.014503836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:36.014940 env[1316]: time="2025-07-11T00:21:36.014864121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ffd55741299b14a7913f619ab1c4e2b75a20db4c9d1b048382d9521fb697f7c pid=1842 runtime=io.containerd.runc.v2 Jul 11 00:21:36.096383 env[1316]: time="2025-07-11T00:21:36.096266657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2c9b27afdc426396f51361ef1c5d13511ce02ce46236b5941dab9342b6dcca6\"" Jul 11 00:21:36.097187 env[1316]: time="2025-07-11T00:21:36.097082008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f09cf3afed8485df1f00985502410874c2ec8bc5e88a0f35ec9b22b7f8292c7\"" Jul 11 00:21:36.098323 kubelet[1770]: E0711 00:21:36.098295 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:36.098786 kubelet[1770]: E0711 00:21:36.098757 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:36.100022 env[1316]: time="2025-07-11T00:21:36.099990644Z" level=info msg="CreateContainer within sandbox 
\"c2c9b27afdc426396f51361ef1c5d13511ce02ce46236b5941dab9342b6dcca6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:21:36.100359 env[1316]: time="2025-07-11T00:21:36.100327254Z" level=info msg="CreateContainer within sandbox \"8f09cf3afed8485df1f00985502410874c2ec8bc5e88a0f35ec9b22b7f8292c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:21:36.103306 env[1316]: time="2025-07-11T00:21:36.103275322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c699b0927dce4d6d32bc978e9b69d15b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ffd55741299b14a7913f619ab1c4e2b75a20db4c9d1b048382d9521fb697f7c\"" Jul 11 00:21:36.103834 kubelet[1770]: E0711 00:21:36.103815 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:36.105308 env[1316]: time="2025-07-11T00:21:36.105274546Z" level=info msg="CreateContainer within sandbox \"7ffd55741299b14a7913f619ab1c4e2b75a20db4c9d1b048382d9521fb697f7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:21:36.114835 env[1316]: time="2025-07-11T00:21:36.114783172Z" level=info msg="CreateContainer within sandbox \"c2c9b27afdc426396f51361ef1c5d13511ce02ce46236b5941dab9342b6dcca6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ad6d39267a04991db287e2295cd2c6b86ff1f3cc0cfa424374a5267336d2064\"" Jul 11 00:21:36.115529 env[1316]: time="2025-07-11T00:21:36.115492505Z" level=info msg="StartContainer for \"5ad6d39267a04991db287e2295cd2c6b86ff1f3cc0cfa424374a5267336d2064\"" Jul 11 00:21:36.118218 env[1316]: time="2025-07-11T00:21:36.118181986Z" level=info msg="CreateContainer within sandbox \"8f09cf3afed8485df1f00985502410874c2ec8bc5e88a0f35ec9b22b7f8292c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"c4e81304064eb4666bcf4bf922747c31d24e8f7b907d7780c0c94ca259e926f7\"" Jul 11 00:21:36.118660 env[1316]: time="2025-07-11T00:21:36.118632213Z" level=info msg="StartContainer for \"c4e81304064eb4666bcf4bf922747c31d24e8f7b907d7780c0c94ca259e926f7\"" Jul 11 00:21:36.121239 kubelet[1770]: E0711 00:21:36.121192 1770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="1.6s" Jul 11 00:21:36.123715 env[1316]: time="2025-07-11T00:21:36.123667127Z" level=info msg="CreateContainer within sandbox \"7ffd55741299b14a7913f619ab1c4e2b75a20db4c9d1b048382d9521fb697f7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8acf0605c80a704270d3774f18bc98215a42ea8b2a3601906c3eaffd89517a3c\"" Jul 11 00:21:36.124211 env[1316]: time="2025-07-11T00:21:36.124176741Z" level=info msg="StartContainer for \"8acf0605c80a704270d3774f18bc98215a42ea8b2a3601906c3eaffd89517a3c\"" Jul 11 00:21:36.225493 env[1316]: time="2025-07-11T00:21:36.221656298Z" level=info msg="StartContainer for \"5ad6d39267a04991db287e2295cd2c6b86ff1f3cc0cfa424374a5267336d2064\" returns successfully" Jul 11 00:21:36.244258 env[1316]: time="2025-07-11T00:21:36.243779144Z" level=info msg="StartContainer for \"8acf0605c80a704270d3774f18bc98215a42ea8b2a3601906c3eaffd89517a3c\" returns successfully" Jul 11 00:21:36.244485 env[1316]: time="2025-07-11T00:21:36.244451045Z" level=info msg="StartContainer for \"c4e81304064eb4666bcf4bf922747c31d24e8f7b907d7780c0c94ca259e926f7\" returns successfully" Jul 11 00:21:36.383108 kubelet[1770]: I0711 00:21:36.383017 1770 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:21:36.383396 kubelet[1770]: E0711 00:21:36.383314 1770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial 
tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jul 11 00:21:36.741864 kubelet[1770]: E0711 00:21:36.741833 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:36.746970 kubelet[1770]: E0711 00:21:36.746943 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:36.748026 kubelet[1770]: E0711 00:21:36.748006 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:37.727684 kubelet[1770]: E0711 00:21:37.727639 1770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:21:37.750292 kubelet[1770]: E0711 00:21:37.750266 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:37.981016 kubelet[1770]: E0711 00:21:37.980924 1770 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 11 00:21:37.985190 kubelet[1770]: I0711 00:21:37.985166 1770 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:21:37.992572 kubelet[1770]: I0711 00:21:37.992542 1770 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:21:38.679435 kubelet[1770]: I0711 00:21:38.679400 1770 apiserver.go:52] "Watching apiserver" Jul 11 00:21:38.719212 kubelet[1770]: I0711 00:21:38.719180 1770 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 
00:21:39.442702 systemd[1]: Reloading. Jul 11 00:21:39.491922 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-07-11T00:21:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:21:39.491952 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-07-11T00:21:39Z" level=info msg="torcx already run" Jul 11 00:21:39.560847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:21:39.560869 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:21:39.576382 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:21:39.647707 kubelet[1770]: I0711 00:21:39.647670 1770 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:21:39.647892 systemd[1]: Stopping kubelet.service... Jul 11 00:21:39.669269 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:21:39.669552 systemd[1]: Stopped kubelet.service. Jul 11 00:21:39.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:39.670304 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 11 00:21:39.670342 kernel: audit: type=1131 audit(1752193299.667:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:39.671314 systemd[1]: Starting kubelet.service... Jul 11 00:21:39.766448 systemd[1]: Started kubelet.service. Jul 11 00:21:39.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:39.769917 kernel: audit: type=1130 audit(1752193299.764:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:39.800659 kubelet[2117]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:21:39.800659 kubelet[2117]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:21:39.800659 kubelet[2117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:21:39.801065 kubelet[2117]: I0711 00:21:39.800737 2117 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:21:39.807385 kubelet[2117]: I0711 00:21:39.807349 2117 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:21:39.807501 kubelet[2117]: I0711 00:21:39.807489 2117 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:21:39.807945 kubelet[2117]: I0711 00:21:39.807926 2117 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:21:39.810054 kubelet[2117]: I0711 00:21:39.810026 2117 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:21:39.812495 kubelet[2117]: I0711 00:21:39.812467 2117 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:21:39.815393 kubelet[2117]: E0711 00:21:39.815340 2117 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:21:39.815393 kubelet[2117]: I0711 00:21:39.815391 2117 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:21:39.817740 kubelet[2117]: I0711 00:21:39.817722 2117 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:21:39.818062 kubelet[2117]: I0711 00:21:39.818050 2117 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:21:39.818169 kubelet[2117]: I0711 00:21:39.818144 2117 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:21:39.818317 kubelet[2117]: I0711 00:21:39.818170 2117 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 11 00:21:39.818394 kubelet[2117]: I0711 00:21:39.818325 2117 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:21:39.818394 kubelet[2117]: I0711 00:21:39.818334 2117 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:21:39.818394 kubelet[2117]: I0711 00:21:39.818364 2117 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:21:39.818457 kubelet[2117]: I0711 00:21:39.818450 2117 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:21:39.818479 kubelet[2117]: I0711 00:21:39.818463 2117 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:21:39.818479 kubelet[2117]: I0711 00:21:39.818478 2117 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:21:39.818538 kubelet[2117]: I0711 00:21:39.818490 2117 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:21:39.818988 kubelet[2117]: I0711 00:21:39.818961 2117 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 11 00:21:39.823835 kubelet[2117]: I0711 00:21:39.823516 2117 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:21:39.824018 kubelet[2117]: I0711 00:21:39.823999 2117 server.go:1274] "Started kubelet" Jul 11 00:21:39.826380 kubelet[2117]: I0711 00:21:39.826352 2117 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 11 00:21:39.826427 kubelet[2117]: I0711 00:21:39.826388 2117 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 11 00:21:39.826427 kubelet[2117]: I0711 
00:21:39.826410 2117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:21:39.824000 audit[2117]: AVC avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:21:39.829086 kubelet[2117]: I0711 00:21:39.829050 2117 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:21:39.824000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 11 00:21:39.831112 kernel: audit: type=1400 audit(1752193299.824:224): avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:21:39.831149 kernel: audit: type=1401 audit(1752193299.824:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 11 00:21:39.831165 kernel: audit: type=1300 audit(1752193299.824:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a76cc0 a1=4000057e18 a2=4000a76c90 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:39.824000 audit[2117]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a76cc0 a1=4000057e18 a2=4000a76c90 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:39.831671 kubelet[2117]: I0711 00:21:39.831638 2117 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:21:39.833128 kubelet[2117]: I0711 00:21:39.833080 2117 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:21:39.833296 kubelet[2117]: I0711 00:21:39.833281 
2117 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:21:39.833509 kubelet[2117]: I0711 00:21:39.833490 2117 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:21:39.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 11 00:21:39.834779 kubelet[2117]: I0711 00:21:39.834761 2117 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:21:39.835003 kubelet[2117]: E0711 00:21:39.834984 2117 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:21:39.835526 kubelet[2117]: I0711 00:21:39.835505 2117 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:21:39.835638 kubelet[2117]: I0711 00:21:39.835625 2117 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:21:39.838139 kernel: audit: type=1327 audit(1752193299.824:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 11 00:21:39.847490 kernel: audit: type=1400 audit(1752193299.824:225): avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:21:39.847581 kernel: audit: type=1401 audit(1752193299.824:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 11 00:21:39.848197 kernel: audit: type=1300 
audit(1752193299.824:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000411520 a1=4000057e30 a2=4000a76d50 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:39.824000 audit[2117]: AVC avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:21:39.824000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 11 00:21:39.824000 audit[2117]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000411520 a1=4000057e30 a2=4000a76d50 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:39.848631 kubelet[2117]: I0711 00:21:39.848608 2117 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:21:39.848631 kubelet[2117]: I0711 00:21:39.848625 2117 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:21:39.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 11 00:21:39.850127 kubelet[2117]: I0711 00:21:39.850095 2117 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:21:39.853351 kernel: audit: type=1327 audit(1752193299.824:225): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 11 00:21:39.869209 kubelet[2117]: E0711 00:21:39.869129 2117 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:21:39.877473 kubelet[2117]: I0711 00:21:39.877447 2117 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:21:39.879973 kubelet[2117]: I0711 00:21:39.879948 2117 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:21:39.880091 kubelet[2117]: I0711 00:21:39.880077 2117 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:21:39.880179 kubelet[2117]: I0711 00:21:39.880167 2117 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:21:39.880307 kubelet[2117]: E0711 00:21:39.880277 2117 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:21:39.899502 kubelet[2117]: I0711 00:21:39.899474 2117 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:21:39.899502 kubelet[2117]: I0711 00:21:39.899496 2117 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:21:39.899597 kubelet[2117]: I0711 00:21:39.899516 2117 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:21:39.899676 kubelet[2117]: I0711 00:21:39.899659 2117 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:21:39.899705 kubelet[2117]: I0711 00:21:39.899675 2117 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:21:39.899705 kubelet[2117]: I0711 00:21:39.899692 2117 policy_none.go:49] "None policy: Start" Jul 11 00:21:39.900258 kubelet[2117]: I0711 
00:21:39.900243 2117 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:21:39.900363 kubelet[2117]: I0711 00:21:39.900351 2117 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:21:39.900539 kubelet[2117]: I0711 00:21:39.900527 2117 state_mem.go:75] "Updated machine memory state" Jul 11 00:21:39.901717 kubelet[2117]: I0711 00:21:39.901695 2117 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:21:39.899000 audit[2117]: AVC avc: denied { mac_admin } for pid=2117 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:21:39.899000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 11 00:21:39.899000 audit[2117]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400059be90 a1=400030d620 a2=400059be60 a3=25 items=0 ppid=1 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:39.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 11 00:21:39.902067 kubelet[2117]: I0711 00:21:39.902047 2117 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 11 00:21:39.902400 kubelet[2117]: I0711 00:21:39.902383 2117 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:21:39.902526 kubelet[2117]: I0711 00:21:39.902493 2117 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:21:39.902731 kubelet[2117]: I0711 00:21:39.902712 2117 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:21:40.006199 kubelet[2117]: I0711 00:21:40.006158 2117 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:21:40.012652 kubelet[2117]: I0711 00:21:40.012617 2117 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:21:40.012752 kubelet[2117]: I0711 00:21:40.012697 2117 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:21:40.136775 kubelet[2117]: I0711 00:21:40.136659 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:21:40.136775 kubelet[2117]: I0711 00:21:40.136700 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:21:40.136775 kubelet[2117]: I0711 00:21:40.136719 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:21:40.136775 kubelet[2117]: I0711 00:21:40.136735 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c699b0927dce4d6d32bc978e9b69d15b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c699b0927dce4d6d32bc978e9b69d15b\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:21:40.136775 kubelet[2117]: I0711 00:21:40.136753 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c699b0927dce4d6d32bc978e9b69d15b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c699b0927dce4d6d32bc978e9b69d15b\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:21:40.137019 kubelet[2117]: I0711 00:21:40.136790 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:21:40.137019 kubelet[2117]: I0711 00:21:40.136827 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c699b0927dce4d6d32bc978e9b69d15b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c699b0927dce4d6d32bc978e9b69d15b\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:21:40.137019 kubelet[2117]: I0711 00:21:40.136847 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:21:40.137019 kubelet[2117]: I0711 00:21:40.136867 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:21:40.291174 kubelet[2117]: E0711 00:21:40.291127 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:40.291478 kubelet[2117]: E0711 00:21:40.291458 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:40.291575 kubelet[2117]: E0711 00:21:40.291462 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:40.819488 kubelet[2117]: I0711 00:21:40.819442 2117 apiserver.go:52] "Watching apiserver" Jul 11 00:21:40.836550 kubelet[2117]: I0711 00:21:40.836526 2117 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:21:40.890585 kubelet[2117]: E0711 00:21:40.890542 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:40.892088 kubelet[2117]: E0711 00:21:40.891249 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:40.897001 kubelet[2117]: E0711 00:21:40.896931 2117 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:21:40.897162 kubelet[2117]: E0711 00:21:40.897116 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:40.912068 kubelet[2117]: I0711 00:21:40.911998 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9119825719999999 podStartE2EDuration="1.911982572s" podCreationTimestamp="2025-07-11 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:21:40.910505689 +0000 UTC m=+1.140962854" watchObservedRunningTime="2025-07-11 00:21:40.911982572 +0000 UTC m=+1.142439697" Jul 11 00:21:40.925503 kubelet[2117]: I0711 00:21:40.925445 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.925427455 podStartE2EDuration="1.925427455s" podCreationTimestamp="2025-07-11 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:21:40.918768244 +0000 UTC m=+1.149225369" watchObservedRunningTime="2025-07-11 00:21:40.925427455 +0000 UTC m=+1.155884580" Jul 11 00:21:40.938726 kubelet[2117]: I0711 00:21:40.938672 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9386573729999999 podStartE2EDuration="1.938657373s" podCreationTimestamp="2025-07-11 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:21:40.925653419 +0000 UTC m=+1.156110544" watchObservedRunningTime="2025-07-11 00:21:40.938657373 +0000 UTC m=+1.169114498" Jul 11 00:21:41.891330 kubelet[2117]: E0711 00:21:41.891299 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:42.670431 kubelet[2117]: E0711 00:21:42.670401 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:42.907999 kubelet[2117]: E0711 00:21:42.907969 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:46.138107 kubelet[2117]: I0711 00:21:46.138074 2117 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:21:46.138756 env[1316]: time="2025-07-11T00:21:46.138657006Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 11 00:21:46.139009 kubelet[2117]: I0711 00:21:46.138816 2117 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:21:47.131734 kubelet[2117]: W0711 00:21:47.131699 2117 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 11 00:21:47.131866 kubelet[2117]: E0711 00:21:47.131805 2117 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:21:47.132968 kubelet[2117]: W0711 00:21:47.132945 2117 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 11 00:21:47.133067 kubelet[2117]: E0711 00:21:47.132974 2117 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:21:47.191088 kubelet[2117]: I0711 00:21:47.191043 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-kube-proxy\") pod \"kube-proxy-zffmg\" (UID: \"b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0\") " pod="kube-system/kube-proxy-zffmg" Jul 11 00:21:47.191088 kubelet[2117]: I0711 00:21:47.191089 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-xtables-lock\") pod \"kube-proxy-zffmg\" (UID: \"b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0\") " pod="kube-system/kube-proxy-zffmg" Jul 11 00:21:47.191433 kubelet[2117]: I0711 00:21:47.191113 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-lib-modules\") pod \"kube-proxy-zffmg\" (UID: \"b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0\") " pod="kube-system/kube-proxy-zffmg" Jul 11 00:21:47.191433 kubelet[2117]: I0711 00:21:47.191131 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb4kq\" (UniqueName: \"kubernetes.io/projected/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-kube-api-access-rb4kq\") pod \"kube-proxy-zffmg\" (UID: \"b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0\") " pod="kube-system/kube-proxy-zffmg" Jul 11 00:21:47.393052 kubelet[2117]: I0711 00:21:47.392931 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67612bd0-0d5f-40c7-8401-354a53876165-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-xq884\" (UID: \"67612bd0-0d5f-40c7-8401-354a53876165\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xq884" Jul 11 00:21:47.393052 kubelet[2117]: I0711 00:21:47.392994 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6vs\" (UniqueName: 
\"kubernetes.io/projected/67612bd0-0d5f-40c7-8401-354a53876165-kube-api-access-cg6vs\") pod \"tigera-operator-5bf8dfcb4-xq884\" (UID: \"67612bd0-0d5f-40c7-8401-354a53876165\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xq884" Jul 11 00:21:47.500998 kubelet[2117]: I0711 00:21:47.500961 2117 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 11 00:21:47.595572 env[1316]: time="2025-07-11T00:21:47.595518874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xq884,Uid:67612bd0-0d5f-40c7-8401-354a53876165,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:21:47.611161 env[1316]: time="2025-07-11T00:21:47.611089404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:47.611161 env[1316]: time="2025-07-11T00:21:47.611126920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:47.611350 env[1316]: time="2025-07-11T00:21:47.611137559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:47.611564 env[1316]: time="2025-07-11T00:21:47.611524280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1896a3ccf45e2b7c1d7fab6287a1c1b50cbad4ce53dde9accc63fc4694dccf61 pid=2177 runtime=io.containerd.runc.v2 Jul 11 00:21:47.662025 env[1316]: time="2025-07-11T00:21:47.661913535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xq884,Uid:67612bd0-0d5f-40c7-8401-354a53876165,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1896a3ccf45e2b7c1d7fab6287a1c1b50cbad4ce53dde9accc63fc4694dccf61\"" Jul 11 00:21:47.664297 env[1316]: time="2025-07-11T00:21:47.664007761Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:21:48.292965 kubelet[2117]: E0711 00:21:48.292631 2117 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:21:48.292965 kubelet[2117]: E0711 00:21:48.292700 2117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-kube-proxy podName:b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0 nodeName:}" failed. No retries permitted until 2025-07-11 00:21:48.792680237 +0000 UTC m=+9.023137362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-kube-proxy") pod "kube-proxy-zffmg" (UID: "b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0") : failed to sync configmap cache: timed out waiting for the condition Jul 11 00:21:48.300365 kubelet[2117]: E0711 00:21:48.300334 2117 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:21:48.300365 kubelet[2117]: E0711 00:21:48.300361 2117 projected.go:194] Error preparing data for projected volume kube-api-access-rb4kq for pod kube-system/kube-proxy-zffmg: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:21:48.300483 kubelet[2117]: E0711 00:21:48.300410 2117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-kube-api-access-rb4kq podName:b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0 nodeName:}" failed. No retries permitted until 2025-07-11 00:21:48.800396538 +0000 UTC m=+9.030853663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rb4kq" (UniqueName: "kubernetes.io/projected/b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0-kube-api-access-rb4kq") pod "kube-proxy-zffmg" (UID: "b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0") : failed to sync configmap cache: timed out waiting for the condition Jul 11 00:21:48.933480 kubelet[2117]: E0711 00:21:48.933271 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:48.935446 env[1316]: time="2025-07-11T00:21:48.935403880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zffmg,Uid:b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0,Namespace:kube-system,Attempt:0,}" Jul 11 00:21:48.959005 env[1316]: time="2025-07-11T00:21:48.958943587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:48.959130 env[1316]: time="2025-07-11T00:21:48.959023300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:48.959130 env[1316]: time="2025-07-11T00:21:48.959049137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:48.960489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116294598.mount: Deactivated successfully. 
Jul 11 00:21:48.961239 env[1316]: time="2025-07-11T00:21:48.960938316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7f57f5ed8b52701f107efb711f0c2e07a9a2a469f5282c5519e6fb830c24166 pid=2219 runtime=io.containerd.runc.v2 Jul 11 00:21:49.014864 env[1316]: time="2025-07-11T00:21:49.014804881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zffmg,Uid:b53a7ee3-e8e9-46ec-8f37-f0ac8ffb94e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7f57f5ed8b52701f107efb711f0c2e07a9a2a469f5282c5519e6fb830c24166\"" Jul 11 00:21:49.015447 kubelet[2117]: E0711 00:21:49.015423 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:49.017435 env[1316]: time="2025-07-11T00:21:49.017395928Z" level=info msg="CreateContainer within sandbox \"f7f57f5ed8b52701f107efb711f0c2e07a9a2a469f5282c5519e6fb830c24166\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:21:49.031325 env[1316]: time="2025-07-11T00:21:49.031272963Z" level=info msg="CreateContainer within sandbox \"f7f57f5ed8b52701f107efb711f0c2e07a9a2a469f5282c5519e6fb830c24166\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad766d72386e9343980c7b11040fee02b8f80bd0e25a86200ab7fcbb8f919255\"" Jul 11 00:21:49.034457 env[1316]: time="2025-07-11T00:21:49.034410561Z" level=info msg="StartContainer for \"ad766d72386e9343980c7b11040fee02b8f80bd0e25a86200ab7fcbb8f919255\"" Jul 11 00:21:49.099434 env[1316]: time="2025-07-11T00:21:49.099381212Z" level=info msg="StartContainer for \"ad766d72386e9343980c7b11040fee02b8f80bd0e25a86200ab7fcbb8f919255\" returns successfully" Jul 11 00:21:49.309000 audit[2321]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.312041 kernel: 
kauditd_printk_skb: 4 callbacks suppressed Jul 11 00:21:49.312135 kernel: audit: type=1325 audit(1752193309.309:227): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.309000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee8dd290 a2=0 a3=1 items=0 ppid=2270 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.316379 kernel: audit: type=1300 audit(1752193309.309:227): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee8dd290 a2=0 a3=1 items=0 ppid=2270 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 11 00:21:49.318229 kernel: audit: type=1327 audit(1752193309.309:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 11 00:21:49.309000 audit[2322]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.320101 kernel: audit: type=1325 audit(1752193309.309:228): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.309000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc2e66d0 a2=0 a3=1 items=0 ppid=2270 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.326051 kernel: audit: type=1300 audit(1752193309.309:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc2e66d0 a2=0 a3=1 items=0 ppid=2270 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 11 00:21:49.327963 kernel: audit: type=1327 audit(1752193309.309:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 11 00:21:49.310000 audit[2323]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.329763 kernel: audit: type=1325 audit(1752193309.310:229): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.310000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1564d50 a2=0 a3=1 items=0 ppid=2270 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.333333 kernel: audit: type=1300 audit(1752193309.310:229): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1564d50 a2=0 a3=1 items=0 ppid=2270 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.333591 kernel: audit: type=1327 audit(1752193309.310:229): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 11 00:21:49.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 11 00:21:49.311000 audit[2324]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.336922 kernel: audit: type=1325 audit(1752193309.311:230): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.311000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea223e80 a2=0 a3=1 items=0 ppid=2270 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 11 00:21:49.312000 audit[2325]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.312000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2c86120 a2=0 a3=1 items=0 ppid=2270 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 11 00:21:49.312000 audit[2326]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.312000 audit[2326]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde2de2a0 a2=0 a3=1 items=0 ppid=2270 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.312000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 11 00:21:49.419000 audit[2327]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.419000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff66b1ed0 a2=0 a3=1 items=0 ppid=2270 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 11 00:21:49.423000 audit[2329]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.423000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc0d4db30 a2=0 a3=1 items=0 ppid=2270 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 11 00:21:49.428000 audit[2332]: 
NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.428000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc2c421d0 a2=0 a3=1 items=0 ppid=2270 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 11 00:21:49.429000 audit[2333]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.429000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1615b00 a2=0 a3=1 items=0 ppid=2270 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 11 00:21:49.431000 audit[2335]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.431000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc9c1af70 a2=0 a3=1 items=0 ppid=2270 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.431000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 11 00:21:49.432000 audit[2336]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.432000 audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd73c8b0 a2=0 a3=1 items=0 ppid=2270 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 11 00:21:49.434000 audit[2338]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.434000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff767a1d0 a2=0 a3=1 items=0 ppid=2270 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 11 00:21:49.437000 audit[2341]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.437000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 
a1=fffffed64190 a2=0 a3=1 items=0 ppid=2270 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 11 00:21:49.438000 audit[2342]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.438000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9a2e140 a2=0 a3=1 items=0 ppid=2270 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.438000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 11 00:21:49.441000 audit[2344]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.441000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff57b4090 a2=0 a3=1 items=0 ppid=2270 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 11 
00:21:49.442000 audit[2345]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.442000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9af6640 a2=0 a3=1 items=0 ppid=2270 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 11 00:21:49.444000 audit[2347]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.444000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffbbe0bd0 a2=0 a3=1 items=0 ppid=2270 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.444000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 11 00:21:49.447000 audit[2350]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.447000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc84abd80 a2=0 a3=1 items=0 ppid=2270 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 
00:21:49.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 11 00:21:49.451000 audit[2353]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.451000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2d72ce0 a2=0 a3=1 items=0 ppid=2270 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 11 00:21:49.451000 audit[2354]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.451000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdf684b00 a2=0 a3=1 items=0 ppid=2270 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 11 00:21:49.454000 audit[2356]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.454000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=524 a0=3 a1=fffffda41010 a2=0 a3=1 items=0 ppid=2270 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 11 00:21:49.457000 audit[2359]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.457000 audit[2359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4e37830 a2=0 a3=1 items=0 ppid=2270 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 11 00:21:49.457000 audit[2360]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.457000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeda193b0 a2=0 a3=1 items=0 ppid=2270 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 11 00:21:49.460000 
audit[2362]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 11 00:21:49.460000 audit[2362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff3880740 a2=0 a3=1 items=0 ppid=2270 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 11 00:21:49.487000 audit[2368]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:49.487000 audit[2368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe63053b0 a2=0 a3=1 items=0 ppid=2270 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:49.496000 audit[2368]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:49.496000 audit[2368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe63053b0 a2=0 a3=1 items=0 ppid=2270 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.496000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:49.497000 audit[2373]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.497000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd9cacdf0 a2=0 a3=1 items=0 ppid=2270 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 11 00:21:49.500000 audit[2375]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.500000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcdd6b450 a2=0 a3=1 items=0 ppid=2270 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 11 00:21:49.503000 audit[2378]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.503000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdafbcb90 a2=0 a3=1 items=0 ppid=2270 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 11 00:21:49.504000 audit[2379]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.504000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4388ad0 a2=0 a3=1 items=0 ppid=2270 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 11 00:21:49.506000 audit[2381]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.506000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd6b95ba0 a2=0 a3=1 items=0 ppid=2270 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 11 00:21:49.507000 audit[2382]: NETFILTER_CFG table=filter:70 family=10 
entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.507000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe608f1c0 a2=0 a3=1 items=0 ppid=2270 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 11 00:21:49.510000 audit[2384]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.510000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe01c3620 a2=0 a3=1 items=0 ppid=2270 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 11 00:21:49.512000 audit[2387]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.512000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe01474c0 a2=0 a3=1 items=0 ppid=2270 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.512000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 11 00:21:49.513000 audit[2388]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.513000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc943400 a2=0 a3=1 items=0 ppid=2270 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 11 00:21:49.515000 audit[2390]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.515000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8a12370 a2=0 a3=1 items=0 ppid=2270 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 11 00:21:49.517723 kubelet[2117]: E0711 00:21:49.517670 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:49.517000 audit[2391]: 
NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.517000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc6751b0 a2=0 a3=1 items=0 ppid=2270 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 11 00:21:49.519000 audit[2393]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.519000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff6d53e90 a2=0 a3=1 items=0 ppid=2270 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 11 00:21:49.524000 audit[2396]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.524000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc0fb0b80 a2=0 a3=1 items=0 ppid=2270 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.524000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 11 00:21:49.528000 audit[2399]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.528000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd222cc00 a2=0 a3=1 items=0 ppid=2270 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 11 00:21:49.529000 audit[2400]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.529000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa4e0770 a2=0 a3=1 items=0 ppid=2270 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 11 00:21:49.531000 audit[2402]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.531000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=600 a0=3 a1=ffffec411e90 a2=0 a3=1 items=0 ppid=2270 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 11 00:21:49.542000 audit[2405]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.542000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffcb53da80 a2=0 a3=1 items=0 ppid=2270 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.542000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 11 00:21:49.543000 audit[2406]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.543000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe027afb0 a2=0 a3=1 items=0 ppid=2270 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 11 
00:21:49.545000 audit[2408]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.545000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffcbeb2e60 a2=0 a3=1 items=0 ppid=2270 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 11 00:21:49.546000 audit[2409]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.546000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec210c40 a2=0 a3=1 items=0 ppid=2270 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 11 00:21:49.548000 audit[2411]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.548000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc5431a00 a2=0 a3=1 items=0 ppid=2270 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.548000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 11 00:21:49.551000 audit[2414]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 11 00:21:49.551000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffda9d0e0 a2=0 a3=1 items=0 ppid=2270 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 11 00:21:49.554000 audit[2416]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 11 00:21:49.554000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffefe55c10 a2=0 a3=1 items=0 ppid=2270 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.554000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:49.554000 audit[2416]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 11 00:21:49.554000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffefe55c10 a2=0 a3=1 items=0 ppid=2270 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:49.554000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:49.562995 env[1316]: time="2025-07-11T00:21:49.562878062Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:49.566240 env[1316]: time="2025-07-11T00:21:49.566194844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:49.569287 env[1316]: time="2025-07-11T00:21:49.569244651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:49.571902 env[1316]: time="2025-07-11T00:21:49.571860376Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:21:49.572267 env[1316]: time="2025-07-11T00:21:49.572228303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 11 00:21:49.576295 env[1316]: time="2025-07-11T00:21:49.576248382Z" level=info msg="CreateContainer within sandbox \"1896a3ccf45e2b7c1d7fab6287a1c1b50cbad4ce53dde9accc63fc4694dccf61\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:21:49.585939 env[1316]: time="2025-07-11T00:21:49.585870679Z" level=info msg="CreateContainer within sandbox \"1896a3ccf45e2b7c1d7fab6287a1c1b50cbad4ce53dde9accc63fc4694dccf61\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} 
returns container id \"feb1c6bb2c47bba4b25ff4acfa063bf31d64e21b04048f29265b3fb44d73d970\"" Jul 11 00:21:49.586450 env[1316]: time="2025-07-11T00:21:49.586333077Z" level=info msg="StartContainer for \"feb1c6bb2c47bba4b25ff4acfa063bf31d64e21b04048f29265b3fb44d73d970\"" Jul 11 00:21:49.639385 env[1316]: time="2025-07-11T00:21:49.639342401Z" level=info msg="StartContainer for \"feb1c6bb2c47bba4b25ff4acfa063bf31d64e21b04048f29265b3fb44d73d970\" returns successfully" Jul 11 00:21:49.910954 kubelet[2117]: E0711 00:21:49.909621 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:49.910954 kubelet[2117]: E0711 00:21:49.909813 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:49.933700 kubelet[2117]: I0711 00:21:49.933250 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-xq884" podStartSLOduration=1.023161539 podStartE2EDuration="2.93323131s" podCreationTimestamp="2025-07-11 00:21:47 +0000 UTC" firstStartedPulling="2025-07-11 00:21:47.663605762 +0000 UTC m=+7.894062887" lastFinishedPulling="2025-07-11 00:21:49.573675533 +0000 UTC m=+9.804132658" observedRunningTime="2025-07-11 00:21:49.919965701 +0000 UTC m=+10.150422826" watchObservedRunningTime="2025-07-11 00:21:49.93323131 +0000 UTC m=+10.163688435" Jul 11 00:21:52.679193 kubelet[2117]: E0711 00:21:52.679153 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:52.689773 kubelet[2117]: I0711 00:21:52.689721 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zffmg" 
podStartSLOduration=5.6896565169999995 podStartE2EDuration="5.689656517s" podCreationTimestamp="2025-07-11 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:21:49.942138791 +0000 UTC m=+10.172595916" watchObservedRunningTime="2025-07-11 00:21:52.689656517 +0000 UTC m=+12.920113642" Jul 11 00:21:52.915759 kubelet[2117]: E0711 00:21:52.915721 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:55.034447 sudo[1484]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:55.035000 audit[1484]: USER_END pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:55.038441 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 11 00:21:55.038509 kernel: audit: type=1106 audit(1752193315.035:278): pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:55.035000 audit[1484]: CRED_DISP pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 11 00:21:55.042829 kernel: audit: type=1104 audit(1752193315.035:279): pid=1484 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:55.054570 sshd[1478]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:55.058000 audit[1478]: USER_END pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:55.063544 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:36750.service: Deactivated successfully. Jul 11 00:21:55.058000 audit[1478]: CRED_DISP pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:55.067058 kernel: audit: type=1106 audit(1752193315.058:280): pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:55.067136 kernel: audit: type=1104 audit(1752193315.058:281): pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:21:55.067255 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:21:55.067288 systemd-logind[1302]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:21:55.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.33:22-10.0.0.1:36750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:21:55.070276 kernel: audit: type=1131 audit(1752193315.062:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.33:22-10.0.0.1:36750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:21:55.070669 systemd-logind[1302]: Removed session 7. Jul 11 00:21:55.979812 update_engine[1307]: I0711 00:21:55.979755 1307 update_attempter.cc:509] Updating boot flags... Jul 11 00:21:56.188931 kernel: audit: type=1325 audit(1752193316.183:283): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:56.189069 kernel: audit: type=1300 audit(1752193316.183:283): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd50404b0 a2=0 a3=1 items=0 ppid=2270 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:56.183000 audit[2523]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:56.183000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd50404b0 a2=0 a3=1 items=0 ppid=2270 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:56.193935 kernel: audit: type=1327 audit(1752193316.183:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:56.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:56.193000 audit[2523]: NETFILTER_CFG table=nat:90 
family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:56.193000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd50404b0 a2=0 a3=1 items=0 ppid=2270 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:56.200535 kernel: audit: type=1325 audit(1752193316.193:284): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:56.200616 kernel: audit: type=1300 audit(1752193316.193:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd50404b0 a2=0 a3=1 items=0 ppid=2270 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:56.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:56.216000 audit[2525]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:56.216000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe9f733f0 a2=0 a3=1 items=0 ppid=2270 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:56.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:21:56.224000 audit[2525]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2525 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:21:56.224000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe9f733f0 a2=0 a3=1 items=0 ppid=2270 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:21:56.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.155361 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 11 00:22:00.155508 kernel: audit: type=1325 audit(1752193320.151:287): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.151000 audit[2527]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.159873 kernel: audit: type=1300 audit(1752193320.151:287): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffd344c30 a2=0 a3=1 items=0 ppid=2270 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.151000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffd344c30 a2=0 a3=1 items=0 ppid=2270 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.163806 kernel: audit: type=1327 audit(1752193320.151:287): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.163000 audit[2527]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.166645 kernel: audit: type=1325 audit(1752193320.163:288): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.163000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd344c30 a2=0 a3=1 items=0 ppid=2270 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.172945 kernel: audit: type=1300 audit(1752193320.163:288): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd344c30 a2=0 a3=1 items=0 ppid=2270 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.173022 kernel: audit: type=1327 audit(1752193320.163:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.204000 audit[2529]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.207917 kernel: audit: type=1325 audit(1752193320.204:289): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.204000 audit[2529]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=6736 a0=3 a1=ffffd8b4fa60 a2=0 a3=1 items=0 ppid=2270 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.215112 kernel: audit: type=1300 audit(1752193320.204:289): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd8b4fa60 a2=0 a3=1 items=0 ppid=2270 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.215185 kernel: audit: type=1327 audit(1752193320.204:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.214000 audit[2529]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.217713 kernel: audit: type=1325 audit(1752193320.214:290): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:00.214000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd8b4fa60 a2=0 a3=1 items=0 ppid=2270 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:00.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:00.293340 kubelet[2117]: I0711 00:22:00.293300 2117 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97-typha-certs\") pod \"calico-typha-5448f49787-m9jch\" (UID: \"c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97\") " pod="calico-system/calico-typha-5448f49787-m9jch" Jul 11 00:22:00.293767 kubelet[2117]: I0711 00:22:00.293744 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97-tigera-ca-bundle\") pod \"calico-typha-5448f49787-m9jch\" (UID: \"c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97\") " pod="calico-system/calico-typha-5448f49787-m9jch" Jul 11 00:22:00.293850 kubelet[2117]: I0711 00:22:00.293837 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8rh\" (UniqueName: \"kubernetes.io/projected/c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97-kube-api-access-sh8rh\") pod \"calico-typha-5448f49787-m9jch\" (UID: \"c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97\") " pod="calico-system/calico-typha-5448f49787-m9jch" Jul 11 00:22:00.490956 kubelet[2117]: E0711 00:22:00.490915 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:00.492838 env[1316]: time="2025-07-11T00:22:00.491566486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5448f49787-m9jch,Uid:c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:00.507550 env[1316]: time="2025-07-11T00:22:00.507383429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:00.507550 env[1316]: time="2025-07-11T00:22:00.507421067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:00.507550 env[1316]: time="2025-07-11T00:22:00.507431267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:00.507791 env[1316]: time="2025-07-11T00:22:00.507618738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/968ea40b232fc33a3b9533e75bcf36b14a9138466a18427c10cb04b3e06fe196 pid=2539 runtime=io.containerd.runc.v2 Jul 11 00:22:00.578750 env[1316]: time="2025-07-11T00:22:00.578711122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5448f49787-m9jch,Uid:c00ee40f-17eb-4c5b-b7f3-4b2ae8d40f97,Namespace:calico-system,Attempt:0,} returns sandbox id \"968ea40b232fc33a3b9533e75bcf36b14a9138466a18427c10cb04b3e06fe196\"" Jul 11 00:22:00.580738 kubelet[2117]: E0711 00:22:00.580694 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:00.583774 env[1316]: time="2025-07-11T00:22:00.582697706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:22:00.595260 kubelet[2117]: I0711 00:22:00.595223 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-lib-modules\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595370 kubelet[2117]: I0711 00:22:00.595268 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-policysync\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " 
pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595370 kubelet[2117]: I0711 00:22:00.595286 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-var-lib-calico\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595370 kubelet[2117]: I0711 00:22:00.595329 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2500f257-e300-492f-af8f-78bb0c2f94a3-tigera-ca-bundle\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595370 kubelet[2117]: I0711 00:22:00.595362 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-var-run-calico\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595477 kubelet[2117]: I0711 00:22:00.595383 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-cni-bin-dir\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595477 kubelet[2117]: I0711 00:22:00.595407 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-flexvol-driver-host\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 
00:22:00.595477 kubelet[2117]: I0711 00:22:00.595428 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2500f257-e300-492f-af8f-78bb0c2f94a3-node-certs\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595477 kubelet[2117]: I0711 00:22:00.595451 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-cni-log-dir\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595477 kubelet[2117]: I0711 00:22:00.595473 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7s4\" (UniqueName: \"kubernetes.io/projected/2500f257-e300-492f-af8f-78bb0c2f94a3-kube-api-access-qd7s4\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595590 kubelet[2117]: I0711 00:22:00.595493 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-cni-net-dir\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.595590 kubelet[2117]: I0711 00:22:00.595523 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2500f257-e300-492f-af8f-78bb0c2f94a3-xtables-lock\") pod \"calico-node-vscl2\" (UID: \"2500f257-e300-492f-af8f-78bb0c2f94a3\") " pod="calico-system/calico-node-vscl2" Jul 11 00:22:00.674488 kubelet[2117]: E0711 00:22:00.674423 2117 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q27mn" podUID="4568eb55-c992-4e0f-86d7-395721225945" Jul 11 00:22:00.697051 kubelet[2117]: E0711 00:22:00.697015 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.697051 kubelet[2117]: W0711 00:22:00.697037 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.697051 kubelet[2117]: E0711 00:22:00.697067 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.697365 kubelet[2117]: E0711 00:22:00.697329 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.697365 kubelet[2117]: W0711 00:22:00.697358 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.697443 kubelet[2117]: E0711 00:22:00.697372 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.699571 kubelet[2117]: E0711 00:22:00.699537 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.699571 kubelet[2117]: W0711 00:22:00.699554 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.699692 kubelet[2117]: E0711 00:22:00.699578 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.713524 kubelet[2117]: E0711 00:22:00.713490 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.713524 kubelet[2117]: W0711 00:22:00.713510 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.713684 kubelet[2117]: E0711 00:22:00.713535 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.734255 env[1316]: time="2025-07-11T00:22:00.734212862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vscl2,Uid:2500f257-e300-492f-af8f-78bb0c2f94a3,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:00.749002 env[1316]: time="2025-07-11T00:22:00.748820897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:00.749002 env[1316]: time="2025-07-11T00:22:00.748916093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:00.749002 env[1316]: time="2025-07-11T00:22:00.748943412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:00.750311 env[1316]: time="2025-07-11T00:22:00.749178801Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421 pid=2589 runtime=io.containerd.runc.v2 Jul 11 00:22:00.796454 kubelet[2117]: E0711 00:22:00.796417 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.796454 kubelet[2117]: W0711 00:22:00.796448 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.796691 kubelet[2117]: E0711 00:22:00.796469 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.796691 kubelet[2117]: I0711 00:22:00.796499 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4568eb55-c992-4e0f-86d7-395721225945-kubelet-dir\") pod \"csi-node-driver-q27mn\" (UID: \"4568eb55-c992-4e0f-86d7-395721225945\") " pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:00.796691 kubelet[2117]: E0711 00:22:00.796662 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.796691 kubelet[2117]: W0711 00:22:00.796679 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.796691 kubelet[2117]: E0711 00:22:00.796688 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.796813 kubelet[2117]: I0711 00:22:00.796702 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4568eb55-c992-4e0f-86d7-395721225945-socket-dir\") pod \"csi-node-driver-q27mn\" (UID: \"4568eb55-c992-4e0f-86d7-395721225945\") " pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:00.796975 kubelet[2117]: E0711 00:22:00.796833 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.796975 kubelet[2117]: W0711 00:22:00.796854 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.796975 kubelet[2117]: E0711 00:22:00.796863 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.796975 kubelet[2117]: I0711 00:22:00.796878 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7st92\" (UniqueName: \"kubernetes.io/projected/4568eb55-c992-4e0f-86d7-395721225945-kube-api-access-7st92\") pod \"csi-node-driver-q27mn\" (UID: \"4568eb55-c992-4e0f-86d7-395721225945\") " pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:00.797292 kubelet[2117]: E0711 00:22:00.797182 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.797292 kubelet[2117]: W0711 00:22:00.797199 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.797292 kubelet[2117]: E0711 00:22:00.797221 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.797568 kubelet[2117]: E0711 00:22:00.797455 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.797568 kubelet[2117]: W0711 00:22:00.797468 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.797568 kubelet[2117]: E0711 00:22:00.797487 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.797825 kubelet[2117]: E0711 00:22:00.797727 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.797825 kubelet[2117]: W0711 00:22:00.797739 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.797825 kubelet[2117]: E0711 00:22:00.797755 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.798116 kubelet[2117]: E0711 00:22:00.797992 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.798116 kubelet[2117]: W0711 00:22:00.798004 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.798116 kubelet[2117]: E0711 00:22:00.798022 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.798381 kubelet[2117]: E0711 00:22:00.798282 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.798381 kubelet[2117]: W0711 00:22:00.798294 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.798381 kubelet[2117]: E0711 00:22:00.798312 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.798381 kubelet[2117]: I0711 00:22:00.798331 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4568eb55-c992-4e0f-86d7-395721225945-varrun\") pod \"csi-node-driver-q27mn\" (UID: \"4568eb55-c992-4e0f-86d7-395721225945\") " pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:00.799959 kubelet[2117]: E0711 00:22:00.798535 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.799959 kubelet[2117]: W0711 00:22:00.798554 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.799959 kubelet[2117]: E0711 00:22:00.798576 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.799959 kubelet[2117]: E0711 00:22:00.798714 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.799959 kubelet[2117]: W0711 00:22:00.798723 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.799959 kubelet[2117]: E0711 00:22:00.798732 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.799959 kubelet[2117]: E0711 00:22:00.798860 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.799959 kubelet[2117]: W0711 00:22:00.798872 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.799959 kubelet[2117]: E0711 00:22:00.798887 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.800327 kubelet[2117]: I0711 00:22:00.798903 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4568eb55-c992-4e0f-86d7-395721225945-registration-dir\") pod \"csi-node-driver-q27mn\" (UID: \"4568eb55-c992-4e0f-86d7-395721225945\") " pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:00.800327 kubelet[2117]: E0711 00:22:00.799092 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.800327 kubelet[2117]: W0711 00:22:00.799102 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.800327 kubelet[2117]: E0711 00:22:00.799112 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.800327 kubelet[2117]: E0711 00:22:00.799327 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.800327 kubelet[2117]: W0711 00:22:00.799336 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.800327 kubelet[2117]: E0711 00:22:00.799347 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.800327 kubelet[2117]: E0711 00:22:00.799732 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.800327 kubelet[2117]: W0711 00:22:00.799744 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.800535 kubelet[2117]: E0711 00:22:00.799756 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.800535 kubelet[2117]: E0711 00:22:00.800016 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.800535 kubelet[2117]: W0711 00:22:00.800027 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.800535 kubelet[2117]: E0711 00:22:00.800036 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.813031 env[1316]: time="2025-07-11T00:22:00.812993346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vscl2,Uid:2500f257-e300-492f-af8f-78bb0c2f94a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\"" Jul 11 00:22:00.900165 kubelet[2117]: E0711 00:22:00.900130 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.900165 kubelet[2117]: W0711 00:22:00.900151 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.900165 kubelet[2117]: E0711 00:22:00.900170 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.900412 kubelet[2117]: E0711 00:22:00.900387 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.900412 kubelet[2117]: W0711 00:22:00.900399 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.900412 kubelet[2117]: E0711 00:22:00.900414 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.900601 kubelet[2117]: E0711 00:22:00.900582 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.900601 kubelet[2117]: W0711 00:22:00.900594 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.900658 kubelet[2117]: E0711 00:22:00.900607 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.900784 kubelet[2117]: E0711 00:22:00.900766 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.900784 kubelet[2117]: W0711 00:22:00.900777 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.900840 kubelet[2117]: E0711 00:22:00.900789 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.901017 kubelet[2117]: E0711 00:22:00.900997 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.901017 kubelet[2117]: W0711 00:22:00.901010 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.901090 kubelet[2117]: E0711 00:22:00.901025 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.901230 kubelet[2117]: E0711 00:22:00.901218 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.901230 kubelet[2117]: W0711 00:22:00.901229 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.901284 kubelet[2117]: E0711 00:22:00.901243 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.901385 kubelet[2117]: E0711 00:22:00.901376 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.901385 kubelet[2117]: W0711 00:22:00.901385 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.901438 kubelet[2117]: E0711 00:22:00.901397 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.901528 kubelet[2117]: E0711 00:22:00.901518 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.901554 kubelet[2117]: W0711 00:22:00.901527 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.901577 kubelet[2117]: E0711 00:22:00.901552 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.901691 kubelet[2117]: E0711 00:22:00.901680 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.901714 kubelet[2117]: W0711 00:22:00.901690 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.901742 kubelet[2117]: E0711 00:22:00.901710 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.901835 kubelet[2117]: E0711 00:22:00.901825 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.901835 kubelet[2117]: W0711 00:22:00.901834 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.901893 kubelet[2117]: E0711 00:22:00.901851 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.901986 kubelet[2117]: E0711 00:22:00.901976 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.902016 kubelet[2117]: W0711 00:22:00.901990 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.902042 kubelet[2117]: E0711 00:22:00.902018 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.902147 kubelet[2117]: E0711 00:22:00.902137 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.902147 kubelet[2117]: W0711 00:22:00.902146 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.902322 kubelet[2117]: E0711 00:22:00.902210 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.902366 kubelet[2117]: E0711 00:22:00.902337 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.902366 kubelet[2117]: W0711 00:22:00.902345 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.902366 kubelet[2117]: E0711 00:22:00.902358 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.902602 kubelet[2117]: E0711 00:22:00.902587 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.902640 kubelet[2117]: W0711 00:22:00.902602 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.902640 kubelet[2117]: E0711 00:22:00.902617 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.903668 kubelet[2117]: E0711 00:22:00.903638 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.903668 kubelet[2117]: W0711 00:22:00.903652 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.903771 kubelet[2117]: E0711 00:22:00.903712 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.903856 kubelet[2117]: E0711 00:22:00.903825 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.903856 kubelet[2117]: W0711 00:22:00.903852 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.903948 kubelet[2117]: E0711 00:22:00.903878 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.904020 kubelet[2117]: E0711 00:22:00.904002 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.904020 kubelet[2117]: W0711 00:22:00.904009 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.904097 kubelet[2117]: E0711 00:22:00.904077 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.904188 kubelet[2117]: E0711 00:22:00.904174 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.904188 kubelet[2117]: W0711 00:22:00.904183 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.904351 kubelet[2117]: E0711 00:22:00.904258 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.904398 kubelet[2117]: E0711 00:22:00.904371 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.904398 kubelet[2117]: W0711 00:22:00.904380 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.904398 kubelet[2117]: E0711 00:22:00.904394 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.904702 kubelet[2117]: E0711 00:22:00.904688 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.904702 kubelet[2117]: W0711 00:22:00.904701 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.904768 kubelet[2117]: E0711 00:22:00.904716 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.904969 kubelet[2117]: E0711 00:22:00.904954 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.905013 kubelet[2117]: W0711 00:22:00.904972 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.905013 kubelet[2117]: E0711 00:22:00.904990 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.905423 kubelet[2117]: E0711 00:22:00.905410 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.905423 kubelet[2117]: W0711 00:22:00.905422 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.905476 kubelet[2117]: E0711 00:22:00.905463 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.905579 kubelet[2117]: E0711 00:22:00.905570 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.905603 kubelet[2117]: W0711 00:22:00.905579 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.905640 kubelet[2117]: E0711 00:22:00.905629 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.905732 kubelet[2117]: E0711 00:22:00.905723 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.905754 kubelet[2117]: W0711 00:22:00.905733 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.905754 kubelet[2117]: E0711 00:22:00.905743 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:00.905994 kubelet[2117]: E0711 00:22:00.905981 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.905994 kubelet[2117]: W0711 00:22:00.905992 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.906064 kubelet[2117]: E0711 00:22:00.906002 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:00.912417 kubelet[2117]: E0711 00:22:00.912383 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:00.912417 kubelet[2117]: W0711 00:22:00.912400 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:00.912417 kubelet[2117]: E0711 00:22:00.912411 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:01.229000 audit[2665]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:01.229000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd5b53000 a2=0 a3=1 items=0 ppid=2270 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:01.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:01.240000 audit[2665]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:01.240000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5b53000 a2=0 a3=1 items=0 ppid=2270 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:01.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:01.529960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339062472.mount: Deactivated successfully. 
Jul 11 00:22:02.518931 env[1316]: time="2025-07-11T00:22:02.518870568Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:02.520273 env[1316]: time="2025-07-11T00:22:02.520242395Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:02.522997 env[1316]: time="2025-07-11T00:22:02.522963050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:02.523613 env[1316]: time="2025-07-11T00:22:02.523587905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 11 00:22:02.524256 env[1316]: time="2025-07-11T00:22:02.524231480Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:02.527263 env[1316]: time="2025-07-11T00:22:02.527219205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:22:02.540017 env[1316]: time="2025-07-11T00:22:02.539972110Z" level=info msg="CreateContainer within sandbox \"968ea40b232fc33a3b9533e75bcf36b14a9138466a18427c10cb04b3e06fe196\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:22:02.550631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567708549.mount: Deactivated successfully. 
Jul 11 00:22:02.560550 env[1316]: time="2025-07-11T00:22:02.560495754Z" level=info msg="CreateContainer within sandbox \"968ea40b232fc33a3b9533e75bcf36b14a9138466a18427c10cb04b3e06fe196\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8fa1f92479984039a39bc056bec4c4e262bb7cdc583ffa608a870a6e10309a27\"" Jul 11 00:22:02.562456 env[1316]: time="2025-07-11T00:22:02.561316482Z" level=info msg="StartContainer for \"8fa1f92479984039a39bc056bec4c4e262bb7cdc583ffa608a870a6e10309a27\"" Jul 11 00:22:02.668042 env[1316]: time="2025-07-11T00:22:02.667991066Z" level=info msg="StartContainer for \"8fa1f92479984039a39bc056bec4c4e262bb7cdc583ffa608a870a6e10309a27\" returns successfully" Jul 11 00:22:02.880852 kubelet[2117]: E0711 00:22:02.880725 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q27mn" podUID="4568eb55-c992-4e0f-86d7-395721225945" Jul 11 00:22:02.935198 kubelet[2117]: E0711 00:22:02.935157 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:03.010451 kubelet[2117]: E0711 00:22:03.010330 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.010451 kubelet[2117]: W0711 00:22:03.010357 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.010451 kubelet[2117]: E0711 00:22:03.010376 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.010805 kubelet[2117]: E0711 00:22:03.010710 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.010805 kubelet[2117]: W0711 00:22:03.010721 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.010805 kubelet[2117]: E0711 00:22:03.010731 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.011086 kubelet[2117]: E0711 00:22:03.010991 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.011086 kubelet[2117]: W0711 00:22:03.011003 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.011086 kubelet[2117]: E0711 00:22:03.011013 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.011335 kubelet[2117]: E0711 00:22:03.011236 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.011335 kubelet[2117]: W0711 00:22:03.011248 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.011335 kubelet[2117]: E0711 00:22:03.011257 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.011579 kubelet[2117]: E0711 00:22:03.011480 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.011579 kubelet[2117]: W0711 00:22:03.011491 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.011579 kubelet[2117]: E0711 00:22:03.011499 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.011833 kubelet[2117]: E0711 00:22:03.011739 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.011833 kubelet[2117]: W0711 00:22:03.011751 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.011833 kubelet[2117]: E0711 00:22:03.011760 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.012111 kubelet[2117]: E0711 00:22:03.012010 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.012111 kubelet[2117]: W0711 00:22:03.012023 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.012111 kubelet[2117]: E0711 00:22:03.012032 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.012366 kubelet[2117]: E0711 00:22:03.012261 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.012366 kubelet[2117]: W0711 00:22:03.012272 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.012366 kubelet[2117]: E0711 00:22:03.012281 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.012605 kubelet[2117]: E0711 00:22:03.012519 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.012605 kubelet[2117]: W0711 00:22:03.012531 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.012605 kubelet[2117]: E0711 00:22:03.012539 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.012836 kubelet[2117]: E0711 00:22:03.012749 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.012836 kubelet[2117]: W0711 00:22:03.012761 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.012836 kubelet[2117]: E0711 00:22:03.012771 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.013099 kubelet[2117]: E0711 00:22:03.013007 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.013099 kubelet[2117]: W0711 00:22:03.013019 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.013099 kubelet[2117]: E0711 00:22:03.013028 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.013577 kubelet[2117]: E0711 00:22:03.013247 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.013577 kubelet[2117]: W0711 00:22:03.013469 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.013577 kubelet[2117]: E0711 00:22:03.013484 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.013834 kubelet[2117]: E0711 00:22:03.013731 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.013834 kubelet[2117]: W0711 00:22:03.013743 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.013834 kubelet[2117]: E0711 00:22:03.013752 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.014132 kubelet[2117]: E0711 00:22:03.014039 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.014132 kubelet[2117]: W0711 00:22:03.014052 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.014132 kubelet[2117]: E0711 00:22:03.014061 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.014349 kubelet[2117]: E0711 00:22:03.014284 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.014349 kubelet[2117]: W0711 00:22:03.014295 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.014349 kubelet[2117]: E0711 00:22:03.014304 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.017624 kubelet[2117]: E0711 00:22:03.017607 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.017624 kubelet[2117]: W0711 00:22:03.017623 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.017736 kubelet[2117]: E0711 00:22:03.017635 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.017867 kubelet[2117]: E0711 00:22:03.017855 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.017867 kubelet[2117]: W0711 00:22:03.017867 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.018000 kubelet[2117]: E0711 00:22:03.017891 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.018106 kubelet[2117]: E0711 00:22:03.018095 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.018141 kubelet[2117]: W0711 00:22:03.018107 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.018141 kubelet[2117]: E0711 00:22:03.018137 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.018474 kubelet[2117]: E0711 00:22:03.018458 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.018516 kubelet[2117]: W0711 00:22:03.018474 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.018516 kubelet[2117]: E0711 00:22:03.018493 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.018865 kubelet[2117]: E0711 00:22:03.018850 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.018865 kubelet[2117]: W0711 00:22:03.018864 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.018961 kubelet[2117]: E0711 00:22:03.018882 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019382 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.019923 kubelet[2117]: W0711 00:22:03.019396 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019430 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019575 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.019923 kubelet[2117]: W0711 00:22:03.019585 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019608 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019723 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.019923 kubelet[2117]: W0711 00:22:03.019730 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019748 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.019923 kubelet[2117]: E0711 00:22:03.019926 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.020207 kubelet[2117]: W0711 00:22:03.019940 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.020207 kubelet[2117]: E0711 00:22:03.019978 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.020346 kubelet[2117]: E0711 00:22:03.020328 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.020346 kubelet[2117]: W0711 00:22:03.020344 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.020410 kubelet[2117]: E0711 00:22:03.020364 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.020647 kubelet[2117]: E0711 00:22:03.020633 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.020647 kubelet[2117]: W0711 00:22:03.020647 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.020713 kubelet[2117]: E0711 00:22:03.020663 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.020846 kubelet[2117]: E0711 00:22:03.020836 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.020909 kubelet[2117]: W0711 00:22:03.020847 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.020986 kubelet[2117]: E0711 00:22:03.020964 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.021100 kubelet[2117]: E0711 00:22:03.021088 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.021135 kubelet[2117]: W0711 00:22:03.021100 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.021135 kubelet[2117]: E0711 00:22:03.021116 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.021687 kubelet[2117]: E0711 00:22:03.021668 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.021687 kubelet[2117]: W0711 00:22:03.021683 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.021755 kubelet[2117]: E0711 00:22:03.021699 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.022194 kubelet[2117]: E0711 00:22:03.022177 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.022194 kubelet[2117]: W0711 00:22:03.022191 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.022268 kubelet[2117]: E0711 00:22:03.022235 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.022533 kubelet[2117]: E0711 00:22:03.022518 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.022533 kubelet[2117]: W0711 00:22:03.022532 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.022659 kubelet[2117]: E0711 00:22:03.022546 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.022855 kubelet[2117]: E0711 00:22:03.022840 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.022855 kubelet[2117]: W0711 00:22:03.022854 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.022948 kubelet[2117]: E0711 00:22:03.022871 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:22:03.023612 kubelet[2117]: E0711 00:22:03.023591 2117 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:22:03.023612 kubelet[2117]: W0711 00:22:03.023607 2117 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:22:03.023691 kubelet[2117]: E0711 00:22:03.023619 2117 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:22:03.484925 env[1316]: time="2025-07-11T00:22:03.484865606Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:03.487270 env[1316]: time="2025-07-11T00:22:03.486830094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:03.488330 env[1316]: time="2025-07-11T00:22:03.488281041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:03.490207 env[1316]: time="2025-07-11T00:22:03.490168213Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:03.490854 env[1316]: time="2025-07-11T00:22:03.490806990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 11 00:22:03.493403 env[1316]: time="2025-07-11T00:22:03.493367737Z" level=info msg="CreateContainer within sandbox \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:22:03.503758 env[1316]: time="2025-07-11T00:22:03.503716840Z" level=info msg="CreateContainer within sandbox \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad1e0a5a26386a2d483e26ba97cfe9ce11a0abcbf82988e3280797eec694412a\"" Jul 11 
00:22:03.504530 env[1316]: time="2025-07-11T00:22:03.504499812Z" level=info msg="StartContainer for \"ad1e0a5a26386a2d483e26ba97cfe9ce11a0abcbf82988e3280797eec694412a\"" Jul 11 00:22:03.572430 env[1316]: time="2025-07-11T00:22:03.572346026Z" level=info msg="StartContainer for \"ad1e0a5a26386a2d483e26ba97cfe9ce11a0abcbf82988e3280797eec694412a\" returns successfully" Jul 11 00:22:03.612589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad1e0a5a26386a2d483e26ba97cfe9ce11a0abcbf82988e3280797eec694412a-rootfs.mount: Deactivated successfully. Jul 11 00:22:03.625629 env[1316]: time="2025-07-11T00:22:03.625587170Z" level=info msg="shim disconnected" id=ad1e0a5a26386a2d483e26ba97cfe9ce11a0abcbf82988e3280797eec694412a Jul 11 00:22:03.625629 env[1316]: time="2025-07-11T00:22:03.625630689Z" level=warning msg="cleaning up after shim disconnected" id=ad1e0a5a26386a2d483e26ba97cfe9ce11a0abcbf82988e3280797eec694412a namespace=k8s.io Jul 11 00:22:03.625828 env[1316]: time="2025-07-11T00:22:03.625641288Z" level=info msg="cleaning up dead shim" Jul 11 00:22:03.633025 env[1316]: time="2025-07-11T00:22:03.632993141Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:22:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2790 runtime=io.containerd.runc.v2\n" Jul 11 00:22:03.937780 kubelet[2117]: I0711 00:22:03.937655 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:22:03.938424 kubelet[2117]: E0711 00:22:03.937994 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:03.939135 env[1316]: time="2025-07-11T00:22:03.939101774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:22:03.954144 kubelet[2117]: I0711 00:22:03.953936 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5448f49787-m9jch" 
podStartSLOduration=2.009253543 podStartE2EDuration="3.953912875s" podCreationTimestamp="2025-07-11 00:22:00 +0000 UTC" firstStartedPulling="2025-07-11 00:22:00.582401999 +0000 UTC m=+20.812859084" lastFinishedPulling="2025-07-11 00:22:02.527061211 +0000 UTC m=+22.757518416" observedRunningTime="2025-07-11 00:22:02.967858199 +0000 UTC m=+23.198315324" watchObservedRunningTime="2025-07-11 00:22:03.953912875 +0000 UTC m=+24.184370000" Jul 11 00:22:04.881515 kubelet[2117]: E0711 00:22:04.881407 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q27mn" podUID="4568eb55-c992-4e0f-86d7-395721225945" Jul 11 00:22:06.769897 env[1316]: time="2025-07-11T00:22:06.769807096Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:06.771453 env[1316]: time="2025-07-11T00:22:06.771418688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:06.772685 env[1316]: time="2025-07-11T00:22:06.772658291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:06.773899 env[1316]: time="2025-07-11T00:22:06.773853895Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:06.774290 env[1316]: time="2025-07-11T00:22:06.774263443Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 11 00:22:06.776531 env[1316]: time="2025-07-11T00:22:06.776498896Z" level=info msg="CreateContainer within sandbox \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:22:06.789728 env[1316]: time="2025-07-11T00:22:06.789690101Z" level=info msg="CreateContainer within sandbox \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e35b2e54b4ca8f8da2e38cc8a4805b8500ccb8fbfd536012a3c0006202545a65\"" Jul 11 00:22:06.790422 env[1316]: time="2025-07-11T00:22:06.790397640Z" level=info msg="StartContainer for \"e35b2e54b4ca8f8da2e38cc8a4805b8500ccb8fbfd536012a3c0006202545a65\"" Jul 11 00:22:06.870940 env[1316]: time="2025-07-11T00:22:06.870877189Z" level=info msg="StartContainer for \"e35b2e54b4ca8f8da2e38cc8a4805b8500ccb8fbfd536012a3c0006202545a65\" returns successfully" Jul 11 00:22:06.880983 kubelet[2117]: E0711 00:22:06.880931 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q27mn" podUID="4568eb55-c992-4e0f-86d7-395721225945" Jul 11 00:22:07.593593 env[1316]: time="2025-07-11T00:22:07.593539174Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:22:07.613420 env[1316]: time="2025-07-11T00:22:07.613375137Z" level=info msg="shim disconnected" id=e35b2e54b4ca8f8da2e38cc8a4805b8500ccb8fbfd536012a3c0006202545a65 Jul 11 
00:22:07.613635 env[1316]: time="2025-07-11T00:22:07.613611330Z" level=warning msg="cleaning up after shim disconnected" id=e35b2e54b4ca8f8da2e38cc8a4805b8500ccb8fbfd536012a3c0006202545a65 namespace=k8s.io Jul 11 00:22:07.613698 env[1316]: time="2025-07-11T00:22:07.613685648Z" level=info msg="cleaning up dead shim" Jul 11 00:22:07.620261 env[1316]: time="2025-07-11T00:22:07.620219145Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:22:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2863 runtime=io.containerd.runc.v2\n" Jul 11 00:22:07.634110 kubelet[2117]: I0711 00:22:07.634071 2117 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:22:07.754736 kubelet[2117]: I0711 00:22:07.754693 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b96db4e-8484-47ca-a223-07747800a0c8-config\") pod \"goldmane-58fd7646b9-z7hg6\" (UID: \"8b96db4e-8484-47ca-a223-07747800a0c8\") " pod="calico-system/goldmane-58fd7646b9-z7hg6" Jul 11 00:22:07.754736 kubelet[2117]: I0711 00:22:07.754739 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b96db4e-8484-47ca-a223-07747800a0c8-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-z7hg6\" (UID: \"8b96db4e-8484-47ca-a223-07747800a0c8\") " pod="calico-system/goldmane-58fd7646b9-z7hg6" Jul 11 00:22:07.754954 kubelet[2117]: I0711 00:22:07.754762 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ms6\" (UniqueName: \"kubernetes.io/projected/8b96db4e-8484-47ca-a223-07747800a0c8-kube-api-access-s7ms6\") pod \"goldmane-58fd7646b9-z7hg6\" (UID: \"8b96db4e-8484-47ca-a223-07747800a0c8\") " pod="calico-system/goldmane-58fd7646b9-z7hg6" Jul 11 00:22:07.754954 kubelet[2117]: I0711 00:22:07.754779 2117 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686tg\" (UniqueName: \"kubernetes.io/projected/60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a-kube-api-access-686tg\") pod \"calico-apiserver-5f447458f6-544gl\" (UID: \"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a\") " pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" Jul 11 00:22:07.754954 kubelet[2117]: I0711 00:22:07.754802 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26d03df9-579e-4ee3-a314-28ef2eef7859-config-volume\") pod \"coredns-7c65d6cfc9-xr7vr\" (UID: \"26d03df9-579e-4ee3-a314-28ef2eef7859\") " pod="kube-system/coredns-7c65d6cfc9-xr7vr" Jul 11 00:22:07.754954 kubelet[2117]: I0711 00:22:07.754820 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvw7\" (UniqueName: \"kubernetes.io/projected/391acb18-2a72-4443-9d5f-c7fd9457ee12-kube-api-access-tbvw7\") pod \"coredns-7c65d6cfc9-pzkpx\" (UID: \"391acb18-2a72-4443-9d5f-c7fd9457ee12\") " pod="kube-system/coredns-7c65d6cfc9-pzkpx" Jul 11 00:22:07.754954 kubelet[2117]: I0711 00:22:07.754839 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149173fd-5331-45f1-97cd-d2699b6084a9-tigera-ca-bundle\") pod \"calico-kube-controllers-7f89b684d9-vbwhc\" (UID: \"149173fd-5331-45f1-97cd-d2699b6084a9\") " pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" Jul 11 00:22:07.755132 kubelet[2117]: I0711 00:22:07.754856 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-backend-key-pair\") pod \"whisker-7474848f44-d8mxf\" (UID: \"177ece49-6d1a-445b-9476-97fc7ca34320\") " 
pod="calico-system/whisker-7474848f44-d8mxf" Jul 11 00:22:07.755132 kubelet[2117]: I0711 00:22:07.754876 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7bx9\" (UniqueName: \"kubernetes.io/projected/177ece49-6d1a-445b-9476-97fc7ca34320-kube-api-access-g7bx9\") pod \"whisker-7474848f44-d8mxf\" (UID: \"177ece49-6d1a-445b-9476-97fc7ca34320\") " pod="calico-system/whisker-7474848f44-d8mxf" Jul 11 00:22:07.755132 kubelet[2117]: I0711 00:22:07.754908 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv56h\" (UniqueName: \"kubernetes.io/projected/26d03df9-579e-4ee3-a314-28ef2eef7859-kube-api-access-gv56h\") pod \"coredns-7c65d6cfc9-xr7vr\" (UID: \"26d03df9-579e-4ee3-a314-28ef2eef7859\") " pod="kube-system/coredns-7c65d6cfc9-xr7vr" Jul 11 00:22:07.755132 kubelet[2117]: I0711 00:22:07.754928 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0487ab34-6b79-457d-aa32-776a344009da-calico-apiserver-certs\") pod \"calico-apiserver-5f447458f6-qfmmn\" (UID: \"0487ab34-6b79-457d-aa32-776a344009da\") " pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" Jul 11 00:22:07.755132 kubelet[2117]: I0711 00:22:07.754943 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhmd\" (UniqueName: \"kubernetes.io/projected/0487ab34-6b79-457d-aa32-776a344009da-kube-api-access-lkhmd\") pod \"calico-apiserver-5f447458f6-qfmmn\" (UID: \"0487ab34-6b79-457d-aa32-776a344009da\") " pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" Jul 11 00:22:07.755289 kubelet[2117]: I0711 00:22:07.754957 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/391acb18-2a72-4443-9d5f-c7fd9457ee12-config-volume\") pod \"coredns-7c65d6cfc9-pzkpx\" (UID: \"391acb18-2a72-4443-9d5f-c7fd9457ee12\") " pod="kube-system/coredns-7c65d6cfc9-pzkpx" Jul 11 00:22:07.755289 kubelet[2117]: I0711 00:22:07.754974 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8dm\" (UniqueName: \"kubernetes.io/projected/149173fd-5331-45f1-97cd-d2699b6084a9-kube-api-access-ht8dm\") pod \"calico-kube-controllers-7f89b684d9-vbwhc\" (UID: \"149173fd-5331-45f1-97cd-d2699b6084a9\") " pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" Jul 11 00:22:07.755289 kubelet[2117]: I0711 00:22:07.754991 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-ca-bundle\") pod \"whisker-7474848f44-d8mxf\" (UID: \"177ece49-6d1a-445b-9476-97fc7ca34320\") " pod="calico-system/whisker-7474848f44-d8mxf" Jul 11 00:22:07.755289 kubelet[2117]: I0711 00:22:07.755007 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a-calico-apiserver-certs\") pod \"calico-apiserver-5f447458f6-544gl\" (UID: \"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a\") " pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" Jul 11 00:22:07.755289 kubelet[2117]: I0711 00:22:07.755024 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8b96db4e-8484-47ca-a223-07747800a0c8-goldmane-key-pair\") pod \"goldmane-58fd7646b9-z7hg6\" (UID: \"8b96db4e-8484-47ca-a223-07747800a0c8\") " pod="calico-system/goldmane-58fd7646b9-z7hg6" Jul 11 00:22:07.784794 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-e35b2e54b4ca8f8da2e38cc8a4805b8500ccb8fbfd536012a3c0006202545a65-rootfs.mount: Deactivated successfully. Jul 11 00:22:07.953814 env[1316]: time="2025-07-11T00:22:07.952640970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:22:07.967348 kubelet[2117]: E0711 00:22:07.967296 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:07.967855 env[1316]: time="2025-07-11T00:22:07.967802625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xr7vr,Uid:26d03df9-579e-4ee3-a314-28ef2eef7859,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:07.968361 env[1316]: time="2025-07-11T00:22:07.968328890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-544gl,Uid:60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:22:07.977833 kubelet[2117]: E0711 00:22:07.977803 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:07.978755 env[1316]: time="2025-07-11T00:22:07.978548003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkpx,Uid:391acb18-2a72-4443-9d5f-c7fd9457ee12,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:07.980839 env[1316]: time="2025-07-11T00:22:07.980774460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-z7hg6,Uid:8b96db4e-8484-47ca-a223-07747800a0c8,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:07.981688 env[1316]: time="2025-07-11T00:22:07.981659515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-qfmmn,Uid:0487ab34-6b79-457d-aa32-776a344009da,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:22:07.989209 env[1316]: 
time="2025-07-11T00:22:07.989169825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f89b684d9-vbwhc,Uid:149173fd-5331-45f1-97cd-d2699b6084a9,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:07.989499 env[1316]: time="2025-07-11T00:22:07.989475896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7474848f44-d8mxf,Uid:177ece49-6d1a-445b-9476-97fc7ca34320,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:08.293155 env[1316]: time="2025-07-11T00:22:08.293073923Z" level=error msg="Failed to destroy network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.294036 env[1316]: time="2025-07-11T00:22:08.293989499Z" level=error msg="encountered an error cleaning up failed sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.294118 env[1316]: time="2025-07-11T00:22:08.294046058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-qfmmn,Uid:0487ab34-6b79-457d-aa32-776a344009da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.294180 env[1316]: time="2025-07-11T00:22:08.294157175Z" level=error msg="Failed to destroy network for sandbox 
\"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.294469 env[1316]: time="2025-07-11T00:22:08.294432247Z" level=error msg="encountered an error cleaning up failed sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.294526 env[1316]: time="2025-07-11T00:22:08.294476846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f89b684d9-vbwhc,Uid:149173fd-5331-45f1-97cd-d2699b6084a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.294857 kubelet[2117]: E0711 00:22:08.294726 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.295876 kubelet[2117]: E0711 00:22:08.295427 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" Jul 11 00:22:08.295876 kubelet[2117]: E0711 00:22:08.295469 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" Jul 11 00:22:08.295876 kubelet[2117]: E0711 00:22:08.295514 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f89b684d9-vbwhc_calico-system(149173fd-5331-45f1-97cd-d2699b6084a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f89b684d9-vbwhc_calico-system(149173fd-5331-45f1-97cd-d2699b6084a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" podUID="149173fd-5331-45f1-97cd-d2699b6084a9" Jul 11 00:22:08.296056 kubelet[2117]: E0711 00:22:08.294726 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 11 00:22:08.296056 kubelet[2117]: E0711 00:22:08.295786 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" Jul 11 00:22:08.296056 kubelet[2117]: E0711 00:22:08.295805 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" Jul 11 00:22:08.296129 kubelet[2117]: E0711 00:22:08.295831 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f447458f6-qfmmn_calico-apiserver(0487ab34-6b79-457d-aa32-776a344009da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f447458f6-qfmmn_calico-apiserver(0487ab34-6b79-457d-aa32-776a344009da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" podUID="0487ab34-6b79-457d-aa32-776a344009da" Jul 11 00:22:08.311508 env[1316]: time="2025-07-11T00:22:08.311452999Z" level=error msg="Failed to destroy network for sandbox 
\"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.311866 env[1316]: time="2025-07-11T00:22:08.311828549Z" level=error msg="encountered an error cleaning up failed sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.311944 env[1316]: time="2025-07-11T00:22:08.311881668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xr7vr,Uid:26d03df9-579e-4ee3-a314-28ef2eef7859,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.312425 kubelet[2117]: E0711 00:22:08.312089 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.312425 kubelet[2117]: E0711 00:22:08.312144 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xr7vr" Jul 11 00:22:08.312425 kubelet[2117]: E0711 00:22:08.312163 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xr7vr" Jul 11 00:22:08.312558 kubelet[2117]: E0711 00:22:08.312198 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xr7vr_kube-system(26d03df9-579e-4ee3-a314-28ef2eef7859)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xr7vr_kube-system(26d03df9-579e-4ee3-a314-28ef2eef7859)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xr7vr" podUID="26d03df9-579e-4ee3-a314-28ef2eef7859" Jul 11 00:22:08.313685 env[1316]: time="2025-07-11T00:22:08.313644102Z" level=error msg="Failed to destroy network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.314001 env[1316]: time="2025-07-11T00:22:08.313965053Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.314072 env[1316]: time="2025-07-11T00:22:08.314008812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-544gl,Uid:60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.314454 kubelet[2117]: E0711 00:22:08.314311 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.314454 kubelet[2117]: E0711 00:22:08.314353 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" Jul 11 00:22:08.314454 kubelet[2117]: E0711 00:22:08.314372 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" Jul 11 00:22:08.314572 kubelet[2117]: E0711 00:22:08.314403 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f447458f6-544gl_calico-apiserver(60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f447458f6-544gl_calico-apiserver(60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" podUID="60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a" Jul 11 00:22:08.322279 env[1316]: time="2025-07-11T00:22:08.322221196Z" level=error msg="Failed to destroy network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.322589 env[1316]: time="2025-07-11T00:22:08.322550267Z" level=error msg="encountered an error cleaning up failed sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.322646 env[1316]: 
time="2025-07-11T00:22:08.322595186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-z7hg6,Uid:8b96db4e-8484-47ca-a223-07747800a0c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.323128 kubelet[2117]: E0711 00:22:08.322797 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.323128 kubelet[2117]: E0711 00:22:08.322845 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-z7hg6" Jul 11 00:22:08.323128 kubelet[2117]: E0711 00:22:08.322872 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-z7hg6" Jul 11 00:22:08.323269 kubelet[2117]: E0711 00:22:08.322917 
2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-z7hg6_calico-system(8b96db4e-8484-47ca-a223-07747800a0c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-z7hg6_calico-system(8b96db4e-8484-47ca-a223-07747800a0c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-z7hg6" podUID="8b96db4e-8484-47ca-a223-07747800a0c8" Jul 11 00:22:08.334849 env[1316]: time="2025-07-11T00:22:08.334785265Z" level=error msg="Failed to destroy network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.335176 env[1316]: time="2025-07-11T00:22:08.335133216Z" level=error msg="encountered an error cleaning up failed sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.335216 env[1316]: time="2025-07-11T00:22:08.335181855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkpx,Uid:391acb18-2a72-4443-9d5f-c7fd9457ee12,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.335654 kubelet[2117]: E0711 00:22:08.335349 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.335654 kubelet[2117]: E0711 00:22:08.335389 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pzkpx" Jul 11 00:22:08.335654 kubelet[2117]: E0711 00:22:08.335406 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pzkpx" Jul 11 00:22:08.335799 env[1316]: time="2025-07-11T00:22:08.335347050Z" level=error msg="Failed to destroy network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.335799 env[1316]: time="2025-07-11T00:22:08.335738400Z" level=error 
msg="encountered an error cleaning up failed sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.335858 kubelet[2117]: E0711 00:22:08.335437 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pzkpx_kube-system(391acb18-2a72-4443-9d5f-c7fd9457ee12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pzkpx_kube-system(391acb18-2a72-4443-9d5f-c7fd9457ee12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pzkpx" podUID="391acb18-2a72-4443-9d5f-c7fd9457ee12" Jul 11 00:22:08.335937 env[1316]: time="2025-07-11T00:22:08.335810518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7474848f44-d8mxf,Uid:177ece49-6d1a-445b-9476-97fc7ca34320,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.336179 kubelet[2117]: E0711 00:22:08.336055 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.336179 kubelet[2117]: E0711 00:22:08.336089 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7474848f44-d8mxf" Jul 11 00:22:08.336179 kubelet[2117]: E0711 00:22:08.336107 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7474848f44-d8mxf" Jul 11 00:22:08.336285 kubelet[2117]: E0711 00:22:08.336134 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7474848f44-d8mxf_calico-system(177ece49-6d1a-445b-9476-97fc7ca34320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7474848f44-d8mxf_calico-system(177ece49-6d1a-445b-9476-97fc7ca34320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7474848f44-d8mxf" podUID="177ece49-6d1a-445b-9476-97fc7ca34320" Jul 11 00:22:08.883715 env[1316]: time="2025-07-11T00:22:08.883658816Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q27mn,Uid:4568eb55-c992-4e0f-86d7-395721225945,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:08.937255 env[1316]: time="2025-07-11T00:22:08.937179007Z" level=error msg="Failed to destroy network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.939619 env[1316]: time="2025-07-11T00:22:08.939575024Z" level=error msg="encountered an error cleaning up failed sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.939676 env[1316]: time="2025-07-11T00:22:08.939634383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q27mn,Uid:4568eb55-c992-4e0f-86d7-395721225945,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.939928 kubelet[2117]: E0711 00:22:08.939872 2117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.939990 kubelet[2117]: E0711 
00:22:08.939948 2117 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:08.939990 kubelet[2117]: E0711 00:22:08.939978 2117 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q27mn" Jul 11 00:22:08.940045 kubelet[2117]: E0711 00:22:08.940021 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q27mn_calico-system(4568eb55-c992-4e0f-86d7-395721225945)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q27mn_calico-system(4568eb55-c992-4e0f-86d7-395721225945)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q27mn" podUID="4568eb55-c992-4e0f-86d7-395721225945" Jul 11 00:22:08.941960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af-shm.mount: Deactivated successfully. 
Jul 11 00:22:08.953656 kubelet[2117]: I0711 00:22:08.953223 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:08.955145 kubelet[2117]: I0711 00:22:08.955002 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:08.957215 kubelet[2117]: I0711 00:22:08.956427 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:08.957661 env[1316]: time="2025-07-11T00:22:08.957626269Z" level=info msg="StopPodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\"" Jul 11 00:22:08.957954 kubelet[2117]: I0711 00:22:08.957797 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Jul 11 00:22:08.958326 env[1316]: time="2025-07-11T00:22:08.957649948Z" level=info msg="StopPodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\"" Jul 11 00:22:08.960022 env[1316]: time="2025-07-11T00:22:08.959961608Z" level=info msg="StopPodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\"" Jul 11 00:22:08.960123 env[1316]: time="2025-07-11T00:22:08.960025486Z" level=info msg="StopPodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\"" Jul 11 00:22:08.962331 kubelet[2117]: I0711 00:22:08.961726 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:08.962437 env[1316]: time="2025-07-11T00:22:08.962228388Z" level=info msg="StopPodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\"" Jul 11 00:22:08.962786 kubelet[2117]: 
I0711 00:22:08.962762 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Jul 11 00:22:08.963379 env[1316]: time="2025-07-11T00:22:08.963279800Z" level=info msg="StopPodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\"" Jul 11 00:22:08.965229 kubelet[2117]: I0711 00:22:08.965208 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:08.965868 env[1316]: time="2025-07-11T00:22:08.965840613Z" level=info msg="StopPodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\"" Jul 11 00:22:08.970730 kubelet[2117]: I0711 00:22:08.970176 2117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:08.973452 env[1316]: time="2025-07-11T00:22:08.973401094Z" level=info msg="StopPodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\"" Jul 11 00:22:09.009832 env[1316]: time="2025-07-11T00:22:09.009775271Z" level=error msg="StopPodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" failed" error="failed to destroy network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.009972 env[1316]: time="2025-07-11T00:22:09.009769031Z" level=error msg="StopPodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" failed" error="failed to destroy network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.010100 kubelet[2117]: E0711 00:22:09.010049 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:09.010162 kubelet[2117]: E0711 00:22:09.010119 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e"} Jul 11 00:22:09.010212 kubelet[2117]: E0711 00:22:09.010057 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Jul 11 00:22:09.010212 kubelet[2117]: E0711 00:22:09.010176 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"177ece49-6d1a-445b-9476-97fc7ca34320\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.010212 kubelet[2117]: E0711 
00:22:09.010192 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"} Jul 11 00:22:09.010322 kubelet[2117]: E0711 00:22:09.010209 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"177ece49-6d1a-445b-9476-97fc7ca34320\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7474848f44-d8mxf" podUID="177ece49-6d1a-445b-9476-97fc7ca34320" Jul 11 00:22:09.010322 kubelet[2117]: E0711 00:22:09.010222 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0487ab34-6b79-457d-aa32-776a344009da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.010322 kubelet[2117]: E0711 00:22:09.010244 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0487ab34-6b79-457d-aa32-776a344009da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" podUID="0487ab34-6b79-457d-aa32-776a344009da" Jul 11 00:22:09.013604 env[1316]: time="2025-07-11T00:22:09.013563817Z" level=error msg="StopPodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" failed" error="failed to destroy network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.014386 kubelet[2117]: E0711 00:22:09.014259 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:09.014386 kubelet[2117]: E0711 00:22:09.014311 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef"} Jul 11 00:22:09.014386 kubelet[2117]: E0711 00:22:09.014339 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"149173fd-5331-45f1-97cd-d2699b6084a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.014386 kubelet[2117]: E0711 00:22:09.014359 2117 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"149173fd-5331-45f1-97cd-d2699b6084a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" podUID="149173fd-5331-45f1-97cd-d2699b6084a9" Jul 11 00:22:09.027268 env[1316]: time="2025-07-11T00:22:09.027204641Z" level=error msg="StopPodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" failed" error="failed to destroy network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.027478 kubelet[2117]: E0711 00:22:09.027429 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:09.027534 kubelet[2117]: E0711 00:22:09.027485 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503"} Jul 11 00:22:09.027534 kubelet[2117]: E0711 00:22:09.027515 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.027604 kubelet[2117]: E0711 00:22:09.027541 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" podUID="60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a" Jul 11 00:22:09.028200 env[1316]: time="2025-07-11T00:22:09.028158217Z" level=error msg="StopPodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" failed" error="failed to destroy network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.028474 kubelet[2117]: E0711 00:22:09.028367 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:09.028474 kubelet[2117]: E0711 00:22:09.028402 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768"} Jul 11 00:22:09.028474 kubelet[2117]: E0711 00:22:09.028430 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b96db4e-8484-47ca-a223-07747800a0c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.028474 kubelet[2117]: E0711 00:22:09.028448 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b96db4e-8484-47ca-a223-07747800a0c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-z7hg6" podUID="8b96db4e-8484-47ca-a223-07747800a0c8" Jul 11 00:22:09.033010 env[1316]: time="2025-07-11T00:22:09.032970858Z" level=error msg="StopPodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" failed" error="failed to destroy network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
11 00:22:09.033173 kubelet[2117]: E0711 00:22:09.033141 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Jul 11 00:22:09.033234 kubelet[2117]: E0711 00:22:09.033180 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"} Jul 11 00:22:09.033234 kubelet[2117]: E0711 00:22:09.033210 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26d03df9-579e-4ee3-a314-28ef2eef7859\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.033379 kubelet[2117]: E0711 00:22:09.033230 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26d03df9-579e-4ee3-a314-28ef2eef7859\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xr7vr" podUID="26d03df9-579e-4ee3-a314-28ef2eef7859" Jul 11 00:22:09.041168 env[1316]: 
time="2025-07-11T00:22:09.041127777Z" level=error msg="StopPodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" failed" error="failed to destroy network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.041416 kubelet[2117]: E0711 00:22:09.041375 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:09.041469 kubelet[2117]: E0711 00:22:09.041428 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36"} Jul 11 00:22:09.041469 kubelet[2117]: E0711 00:22:09.041462 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"391acb18-2a72-4443-9d5f-c7fd9457ee12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.041542 kubelet[2117]: E0711 00:22:09.041482 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"391acb18-2a72-4443-9d5f-c7fd9457ee12\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pzkpx" podUID="391acb18-2a72-4443-9d5f-c7fd9457ee12" Jul 11 00:22:09.048337 env[1316]: time="2025-07-11T00:22:09.048289680Z" level=error msg="StopPodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" failed" error="failed to destroy network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.048528 kubelet[2117]: E0711 00:22:09.048485 2117 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:09.048567 kubelet[2117]: E0711 00:22:09.048531 2117 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"} Jul 11 00:22:09.048567 kubelet[2117]: E0711 00:22:09.048561 2117 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4568eb55-c992-4e0f-86d7-395721225945\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.048642 kubelet[2117]: E0711 00:22:09.048581 2117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4568eb55-c992-4e0f-86d7-395721225945\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q27mn" podUID="4568eb55-c992-4e0f-86d7-395721225945" Jul 11 00:22:13.204599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791816289.mount: Deactivated successfully. 
Jul 11 00:22:13.515124 env[1316]: time="2025-07-11T00:22:13.515066164Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:13.516433 env[1316]: time="2025-07-11T00:22:13.516408299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:13.517909 env[1316]: time="2025-07-11T00:22:13.517869431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:13.520266 env[1316]: time="2025-07-11T00:22:13.520230346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:13.522220 env[1316]: time="2025-07-11T00:22:13.522174869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 11 00:22:13.537737 env[1316]: time="2025-07-11T00:22:13.537691293Z" level=info msg="CreateContainer within sandbox \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:22:13.548679 env[1316]: time="2025-07-11T00:22:13.548627684Z" level=info msg="CreateContainer within sandbox \"60c72a0dce583b071d2a595c4ee9074776a7dadccae64127d047d2f1fe435421\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e694c9ae94ffdbf8bb3761012ed7cc264ab889111b5e34061b025b1740665595\"" Jul 11 00:22:13.549216 env[1316]: time="2025-07-11T00:22:13.549186954Z" level=info msg="StartContainer for 
\"e694c9ae94ffdbf8bb3761012ed7cc264ab889111b5e34061b025b1740665595\"" Jul 11 00:22:13.619216 env[1316]: time="2025-07-11T00:22:13.619158220Z" level=info msg="StartContainer for \"e694c9ae94ffdbf8bb3761012ed7cc264ab889111b5e34061b025b1740665595\" returns successfully" Jul 11 00:22:13.822242 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:22:13.822384 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:22:13.909261 env[1316]: time="2025-07-11T00:22:13.909208330Z" level=info msg="StopPodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\"" Jul 11 00:22:13.997912 kubelet[2117]: I0711 00:22:13.997831 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vscl2" podStartSLOduration=1.28870342 podStartE2EDuration="13.997814721s" podCreationTimestamp="2025-07-11 00:22:00 +0000 UTC" firstStartedPulling="2025-07-11 00:22:00.815267566 +0000 UTC m=+21.045724651" lastFinishedPulling="2025-07-11 00:22:13.524378827 +0000 UTC m=+33.754835952" observedRunningTime="2025-07-11 00:22:13.997064215 +0000 UTC m=+34.227521340" watchObservedRunningTime="2025-07-11 00:22:13.997814721 +0000 UTC m=+34.228271886" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.054 [INFO][3366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.058 [INFO][3366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" iface="eth0" netns="/var/run/netns/cni-5dd16c05-7c5e-4800-c90c-1ced87cdb939" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.058 [INFO][3366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" iface="eth0" netns="/var/run/netns/cni-5dd16c05-7c5e-4800-c90c-1ced87cdb939" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.061 [INFO][3366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" iface="eth0" netns="/var/run/netns/cni-5dd16c05-7c5e-4800-c90c-1ced87cdb939" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.063 [INFO][3366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.063 [INFO][3366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.239 [INFO][3403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.239 [INFO][3403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.239 [INFO][3403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.248 [WARNING][3403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.248 [INFO][3403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.250 [INFO][3403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:14.253433 env[1316]: 2025-07-11 00:22:14.251 [INFO][3366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:14.256149 env[1316]: time="2025-07-11T00:22:14.255825906Z" level=info msg="TearDown network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" successfully" Jul 11 00:22:14.256149 env[1316]: time="2025-07-11T00:22:14.255869865Z" level=info msg="StopPodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" returns successfully" Jul 11 00:22:14.255681 systemd[1]: run-netns-cni\x2d5dd16c05\x2d7c5e\x2d4800\x2dc90c\x2d1ced87cdb939.mount: Deactivated successfully. 
Jul 11 00:22:14.399258 kubelet[2117]: I0711 00:22:14.399214 2117 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7bx9\" (UniqueName: \"kubernetes.io/projected/177ece49-6d1a-445b-9476-97fc7ca34320-kube-api-access-g7bx9\") pod \"177ece49-6d1a-445b-9476-97fc7ca34320\" (UID: \"177ece49-6d1a-445b-9476-97fc7ca34320\") " Jul 11 00:22:14.399409 kubelet[2117]: I0711 00:22:14.399270 2117 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-backend-key-pair\") pod \"177ece49-6d1a-445b-9476-97fc7ca34320\" (UID: \"177ece49-6d1a-445b-9476-97fc7ca34320\") " Jul 11 00:22:14.399409 kubelet[2117]: I0711 00:22:14.399289 2117 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-ca-bundle\") pod \"177ece49-6d1a-445b-9476-97fc7ca34320\" (UID: \"177ece49-6d1a-445b-9476-97fc7ca34320\") " Jul 11 00:22:14.403274 kubelet[2117]: I0711 00:22:14.402427 2117 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "177ece49-6d1a-445b-9476-97fc7ca34320" (UID: "177ece49-6d1a-445b-9476-97fc7ca34320"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:22:14.404850 kubelet[2117]: I0711 00:22:14.404817 2117 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177ece49-6d1a-445b-9476-97fc7ca34320-kube-api-access-g7bx9" (OuterVolumeSpecName: "kube-api-access-g7bx9") pod "177ece49-6d1a-445b-9476-97fc7ca34320" (UID: "177ece49-6d1a-445b-9476-97fc7ca34320"). InnerVolumeSpecName "kube-api-access-g7bx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:22:14.406211 systemd[1]: var-lib-kubelet-pods-177ece49\x2d6d1a\x2d445b\x2d9476\x2d97fc7ca34320-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg7bx9.mount: Deactivated successfully. Jul 11 00:22:14.407247 kubelet[2117]: I0711 00:22:14.407218 2117 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "177ece49-6d1a-445b-9476-97fc7ca34320" (UID: "177ece49-6d1a-445b-9476-97fc7ca34320"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:22:14.408868 systemd[1]: var-lib-kubelet-pods-177ece49\x2d6d1a\x2d445b\x2d9476\x2d97fc7ca34320-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:22:14.499601 kubelet[2117]: I0711 00:22:14.499563 2117 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7bx9\" (UniqueName: \"kubernetes.io/projected/177ece49-6d1a-445b-9476-97fc7ca34320-kube-api-access-g7bx9\") on node \"localhost\" DevicePath \"\"" Jul 11 00:22:14.499781 kubelet[2117]: I0711 00:22:14.499767 2117 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:22:14.499845 kubelet[2117]: I0711 00:22:14.499834 2117 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/177ece49-6d1a-445b-9476-97fc7ca34320-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:22:15.104305 kubelet[2117]: I0711 00:22:15.103865 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3c757ecf-5c1d-49d4-8250-b9218d03ab6c-whisker-ca-bundle\") pod \"whisker-77bb47f84d-lfd95\" (UID: \"3c757ecf-5c1d-49d4-8250-b9218d03ab6c\") " pod="calico-system/whisker-77bb47f84d-lfd95" Jul 11 00:22:15.104305 kubelet[2117]: I0711 00:22:15.103934 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjn6\" (UniqueName: \"kubernetes.io/projected/3c757ecf-5c1d-49d4-8250-b9218d03ab6c-kube-api-access-dqjn6\") pod \"whisker-77bb47f84d-lfd95\" (UID: \"3c757ecf-5c1d-49d4-8250-b9218d03ab6c\") " pod="calico-system/whisker-77bb47f84d-lfd95" Jul 11 00:22:15.104305 kubelet[2117]: I0711 00:22:15.103955 2117 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c757ecf-5c1d-49d4-8250-b9218d03ab6c-whisker-backend-key-pair\") pod \"whisker-77bb47f84d-lfd95\" (UID: \"3c757ecf-5c1d-49d4-8250-b9218d03ab6c\") " pod="calico-system/whisker-77bb47f84d-lfd95" Jul 11 00:22:15.200000 audit[3499]: AVC avc: denied { write } for pid=3499 comm="tee" name="fd" dev="proc" ino=17996 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.204197 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 11 00:22:15.204289 kernel: audit: type=1400 audit(1752193335.200:293): avc: denied { write } for pid=3499 comm="tee" name="fd" dev="proc" ino=17996 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.204317 kernel: audit: type=1300 audit(1752193335.200:293): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffebc07da a2=241 a3=1b6 items=1 ppid=3455 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.200000 
audit[3499]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffebc07da a2=241 a3=1b6 items=1 ppid=3455 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.206001 systemd[1]: run-containerd-runc-k8s.io-e694c9ae94ffdbf8bb3761012ed7cc264ab889111b5e34061b025b1740665595-runc.8s8vNj.mount: Deactivated successfully. Jul 11 00:22:15.200000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 11 00:22:15.209018 kernel: audit: type=1307 audit(1752193335.200:293): cwd="/etc/service/enabled/node-status-reporter/log" Jul 11 00:22:15.209072 kernel: audit: type=1302 audit(1752193335.200:293): item=0 name="/dev/fd/63" inode=17993 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.200000 audit: PATH item=0 name="/dev/fd/63" inode=17993 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.218677 kernel: audit: type=1327 audit(1752193335.200:293): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.240000 audit[3519]: AVC avc: denied { write } for pid=3519 comm="tee" name="fd" dev="proc" ino=20190 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.240000 audit[3519]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc080e7ea a2=241 a3=1b6 items=1 ppid=3462 pid=3519 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.248241 kernel: audit: type=1400 audit(1752193335.240:294): avc: denied { write } for pid=3519 comm="tee" name="fd" dev="proc" ino=20190 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.248326 kernel: audit: type=1300 audit(1752193335.240:294): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc080e7ea a2=241 a3=1b6 items=1 ppid=3462 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.248349 kernel: audit: type=1307 audit(1752193335.240:294): cwd="/etc/service/enabled/bird/log" Jul 11 00:22:15.240000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 11 00:22:15.240000 audit: PATH item=0 name="/dev/fd/63" inode=19087 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.251850 kernel: audit: type=1302 audit(1752193335.240:294): item=0 name="/dev/fd/63" inode=19087 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.251939 kernel: audit: type=1327 audit(1752193335.240:294): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.240000 audit[3515]: AVC avc: denied { write } for pid=3515 comm="tee" name="fd" 
dev="proc" ino=20196 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.240000 audit[3515]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe649f7e9 a2=241 a3=1b6 items=1 ppid=3461 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.240000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 11 00:22:15.240000 audit: PATH item=0 name="/dev/fd/63" inode=18000 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.243000 audit[3526]: AVC avc: denied { write } for pid=3526 comm="tee" name="fd" dev="proc" ino=20200 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.243000 audit[3526]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff149d7eb a2=241 a3=1b6 items=1 ppid=3459 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.243000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 11 00:22:15.243000 audit: PATH item=0 name="/dev/fd/63" inode=19094 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 
Jul 11 00:22:15.257000 audit[3532]: AVC avc: denied { write } for pid=3532 comm="tee" name="fd" dev="proc" ino=18007 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.257000 audit[3532]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff22ff7e9 a2=241 a3=1b6 items=1 ppid=3465 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.257000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 11 00:22:15.257000 audit: PATH item=0 name="/dev/fd/63" inode=19095 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.258000 audit[3535]: AVC avc: denied { write } for pid=3535 comm="tee" name="fd" dev="proc" ino=18011 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.258000 audit[3535]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffa6b87e9 a2=241 a3=1b6 items=1 ppid=3471 pid=3535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.258000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 11 00:22:15.258000 audit: PATH item=0 name="/dev/fd/63" inode=19096 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.258000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.261000 audit[3528]: AVC avc: denied { write } for pid=3528 comm="tee" name="fd" dev="proc" ino=19099 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 11 00:22:15.261000 audit[3528]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd682f7d9 a2=241 a3=1b6 items=1 ppid=3454 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:15.261000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 11 00:22:15.261000 audit: PATH item=0 name="/dev/fd/63" inode=17194 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:22:15.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 11 00:22:15.321274 env[1316]: time="2025-07-11T00:22:15.321187528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77bb47f84d-lfd95,Uid:3c757ecf-5c1d-49d4-8250-b9218d03ab6c,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:15.457072 systemd-networkd[1099]: cali8a8928cf33d: Link UP Jul 11 00:22:15.459114 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:22:15.459203 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8a8928cf33d: link becomes ready Jul 11 00:22:15.459272 systemd-networkd[1099]: cali8a8928cf33d: Gained carrier Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.370 [INFO][3548] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.385 [INFO][3548] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77bb47f84d--lfd95-eth0 whisker-77bb47f84d- calico-system 3c757ecf-5c1d-49d4-8250-b9218d03ab6c 886 0 2025-07-11 00:22:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77bb47f84d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77bb47f84d-lfd95 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8a8928cf33d [] [] }} ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.385 [INFO][3548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.411 [INFO][3562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" HandleID="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Workload="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.411 [INFO][3562] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" HandleID="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Workload="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"whisker-77bb47f84d-lfd95", "timestamp":"2025-07-11 00:22:15.411675545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.411 [INFO][3562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.411 [INFO][3562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.411 [INFO][3562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.421 [INFO][3562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.429 [INFO][3562] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.433 [INFO][3562] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.435 [INFO][3562] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.437 [INFO][3562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.437 [INFO][3562] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.439 [INFO][3562] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353 Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.442 [INFO][3562] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.447 [INFO][3562] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.447 [INFO][3562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" host="localhost" Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.447 [INFO][3562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:15.474121 env[1316]: 2025-07-11 00:22:15.447 [INFO][3562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" HandleID="k8s-pod-network.3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Workload="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.474768 env[1316]: 2025-07-11 00:22:15.450 [INFO][3548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77bb47f84d--lfd95-eth0", GenerateName:"whisker-77bb47f84d-", Namespace:"calico-system", SelfLink:"", UID:"3c757ecf-5c1d-49d4-8250-b9218d03ab6c", 
ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77bb47f84d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77bb47f84d-lfd95", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a8928cf33d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:15.474768 env[1316]: 2025-07-11 00:22:15.450 [INFO][3548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.474768 env[1316]: 2025-07-11 00:22:15.450 [INFO][3548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a8928cf33d ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.474768 env[1316]: 2025-07-11 00:22:15.459 [INFO][3548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" 
Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.474768 env[1316]: 2025-07-11 00:22:15.461 [INFO][3548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77bb47f84d--lfd95-eth0", GenerateName:"whisker-77bb47f84d-", Namespace:"calico-system", SelfLink:"", UID:"3c757ecf-5c1d-49d4-8250-b9218d03ab6c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77bb47f84d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353", Pod:"whisker-77bb47f84d-lfd95", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a8928cf33d", MAC:"3a:e7:dc:2b:2b:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:15.474768 env[1316]: 2025-07-11 
00:22:15.471 [INFO][3548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353" Namespace="calico-system" Pod="whisker-77bb47f84d-lfd95" WorkloadEndpoint="localhost-k8s-whisker--77bb47f84d--lfd95-eth0" Jul 11 00:22:15.483395 env[1316]: time="2025-07-11T00:22:15.483334378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:15.483395 env[1316]: time="2025-07-11T00:22:15.483377738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:15.483395 env[1316]: time="2025-07-11T00:22:15.483388138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:15.483556 env[1316]: time="2025-07-11T00:22:15.483507017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353 pid=3585 runtime=io.containerd.runc.v2 Jul 11 00:22:15.531863 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:15.549249 env[1316]: time="2025-07-11T00:22:15.549187521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77bb47f84d-lfd95,Uid:3c757ecf-5c1d-49d4-8250-b9218d03ab6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353\"" Jul 11 00:22:15.550692 env[1316]: time="2025-07-11T00:22:15.550659393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:22:15.882812 kubelet[2117]: I0711 00:22:15.882775 2117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177ece49-6d1a-445b-9476-97fc7ca34320" 
path="/var/lib/kubelet/pods/177ece49-6d1a-445b-9476-97fc7ca34320/volumes" Jul 11 00:22:16.585850 env[1316]: time="2025-07-11T00:22:16.585800695Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:16.587496 env[1316]: time="2025-07-11T00:22:16.587460287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:16.589083 env[1316]: time="2025-07-11T00:22:16.589045799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:16.590965 env[1316]: time="2025-07-11T00:22:16.590936070Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:16.591554 env[1316]: time="2025-07-11T00:22:16.591521947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 11 00:22:16.594690 env[1316]: time="2025-07-11T00:22:16.594651131Z" level=info msg="CreateContainer within sandbox \"3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:22:16.607064 env[1316]: time="2025-07-11T00:22:16.607020309Z" level=info msg="CreateContainer within sandbox \"3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ad480d8e574fbb819af6a123e6f405f88960fd18f91dcae3164e886fcfed6467\"" Jul 11 00:22:16.607754 
env[1316]: time="2025-07-11T00:22:16.607725186Z" level=info msg="StartContainer for \"ad480d8e574fbb819af6a123e6f405f88960fd18f91dcae3164e886fcfed6467\"" Jul 11 00:22:16.678734 env[1316]: time="2025-07-11T00:22:16.678685033Z" level=info msg="StartContainer for \"ad480d8e574fbb819af6a123e6f405f88960fd18f91dcae3164e886fcfed6467\" returns successfully" Jul 11 00:22:16.684137 env[1316]: time="2025-07-11T00:22:16.684098926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:22:17.082022 systemd-networkd[1099]: cali8a8928cf33d: Gained IPv6LL Jul 11 00:22:18.344414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888386206.mount: Deactivated successfully. Jul 11 00:22:18.356992 env[1316]: time="2025-07-11T00:22:18.356944991Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:18.358720 env[1316]: time="2025-07-11T00:22:18.358678303Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:18.360334 env[1316]: time="2025-07-11T00:22:18.360305815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:18.361656 env[1316]: time="2025-07-11T00:22:18.361625769Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:18.362168 env[1316]: time="2025-07-11T00:22:18.362139486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 11 00:22:18.364728 env[1316]: time="2025-07-11T00:22:18.364692674Z" level=info msg="CreateContainer within sandbox \"3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:22:18.375674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031639554.mount: Deactivated successfully. Jul 11 00:22:18.381247 env[1316]: time="2025-07-11T00:22:18.381179917Z" level=info msg="CreateContainer within sandbox \"3a7bccdb46ebfca534554e7ea52b92d0a97ee699379d9b7c020ec4aa38831353\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2ee4468dc489e5579cbfd0d79d93f4b3a1365d9df6765462cd2b73c8141fdb5e\"" Jul 11 00:22:18.382107 env[1316]: time="2025-07-11T00:22:18.382081473Z" level=info msg="StartContainer for \"2ee4468dc489e5579cbfd0d79d93f4b3a1365d9df6765462cd2b73c8141fdb5e\"" Jul 11 00:22:18.464063 env[1316]: time="2025-07-11T00:22:18.464015007Z" level=info msg="StartContainer for \"2ee4468dc489e5579cbfd0d79d93f4b3a1365d9df6765462cd2b73c8141fdb5e\" returns successfully" Jul 11 00:22:19.055000 audit[3790]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:19.055000 audit[3790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc62ed670 a2=0 a3=1 items=0 ppid=2270 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:19.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:19.064000 audit[3790]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3790 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jul 11 00:22:19.064000 audit[3790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc62ed670 a2=0 a3=1 items=0 ppid=2270 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:19.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:19.168619 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:50162.service. Jul 11 00:22:19.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.33:22-10.0.0.1:50162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:19.219000 audit[3791]: USER_ACCT pid=3791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:19.220407 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 50162 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:19.221000 audit[3791]: CRED_ACQ pid=3791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:19.221000 audit[3791]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2edb990 a2=3 a3=1 items=0 ppid=1 pid=3791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:19.221000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 
00:22:19.222635 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:19.226820 systemd-logind[1302]: New session 8 of user core. Jul 11 00:22:19.227650 systemd[1]: Started session-8.scope. Jul 11 00:22:19.230000 audit[3791]: USER_START pid=3791 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:19.231000 audit[3795]: CRED_ACQ pid=3795 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:19.388089 sshd[3791]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:19.388000 audit[3791]: USER_END pid=3791 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:19.388000 audit[3791]: CRED_DISP pid=3791 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:19.391079 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:50162.service: Deactivated successfully. Jul 11 00:22:19.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.33:22-10.0.0.1:50162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:19.392019 systemd-logind[1302]: Session 8 logged out. Waiting for processes to exit. 
Jul 11 00:22:19.392079 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:22:19.392978 systemd-logind[1302]: Removed session 8. Jul 11 00:22:19.882775 env[1316]: time="2025-07-11T00:22:19.882720042Z" level=info msg="StopPodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\"" Jul 11 00:22:19.883129 env[1316]: time="2025-07-11T00:22:19.882720122Z" level=info msg="StopPodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\"" Jul 11 00:22:19.927519 kubelet[2117]: I0711 00:22:19.926950 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77bb47f84d-lfd95" podStartSLOduration=2.114005513 podStartE2EDuration="4.92693196s" podCreationTimestamp="2025-07-11 00:22:15 +0000 UTC" firstStartedPulling="2025-07-11 00:22:15.550254715 +0000 UTC m=+35.780711840" lastFinishedPulling="2025-07-11 00:22:18.363181162 +0000 UTC m=+38.593638287" observedRunningTime="2025-07-11 00:22:19.005745377 +0000 UTC m=+39.236202502" watchObservedRunningTime="2025-07-11 00:22:19.92693196 +0000 UTC m=+40.157389085" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.928 [INFO][3853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.928 [INFO][3853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" iface="eth0" netns="/var/run/netns/cni-ad7df772-6a5c-407f-a278-1c0a55dba0c6" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.928 [INFO][3853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" iface="eth0" netns="/var/run/netns/cni-ad7df772-6a5c-407f-a278-1c0a55dba0c6" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.928 [INFO][3853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" iface="eth0" netns="/var/run/netns/cni-ad7df772-6a5c-407f-a278-1c0a55dba0c6" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.928 [INFO][3853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.928 [INFO][3853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.949 [INFO][3870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.949 [INFO][3870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.949 [INFO][3870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.959 [WARNING][3870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.959 [INFO][3870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.961 [INFO][3870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:19.967392 env[1316]: 2025-07-11 00:22:19.965 [INFO][3853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:19.967839 env[1316]: time="2025-07-11T00:22:19.967552774Z" level=info msg="TearDown network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" successfully" Jul 11 00:22:19.967839 env[1316]: time="2025-07-11T00:22:19.967596093Z" level=info msg="StopPodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" returns successfully" Jul 11 00:22:19.969804 systemd[1]: run-netns-cni\x2dad7df772\x2d6a5c\x2d407f\x2da278\x2d1c0a55dba0c6.mount: Deactivated successfully. 
Jul 11 00:22:19.970860 env[1316]: time="2025-07-11T00:22:19.970823879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-z7hg6,Uid:8b96db4e-8484-47ca-a223-07747800a0c8,Namespace:calico-system,Attempt:1,}" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.934 [INFO][3854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.935 [INFO][3854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" iface="eth0" netns="/var/run/netns/cni-41a3d089-99d3-5271-6dbe-54f057318278" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.935 [INFO][3854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" iface="eth0" netns="/var/run/netns/cni-41a3d089-99d3-5271-6dbe-54f057318278" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.935 [INFO][3854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" iface="eth0" netns="/var/run/netns/cni-41a3d089-99d3-5271-6dbe-54f057318278" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.935 [INFO][3854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.935 [INFO][3854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.951 [INFO][3876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.951 [INFO][3876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.961 [INFO][3876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.977 [WARNING][3876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.978 [INFO][3876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.981 [INFO][3876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:19.991808 env[1316]: 2025-07-11 00:22:19.988 [INFO][3854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Jul 11 00:22:19.992523 env[1316]: time="2025-07-11T00:22:19.992482739Z" level=info msg="TearDown network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" successfully" Jul 11 00:22:19.992633 env[1316]: time="2025-07-11T00:22:19.992613699Z" level=info msg="StopPodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" returns successfully" Jul 11 00:22:19.993406 kubelet[2117]: E0711 00:22:19.992940 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:19.994764 systemd[1]: run-netns-cni\x2d41a3d089\x2d99d3\x2d5271\x2d6dbe\x2d54f057318278.mount: Deactivated successfully. 
Jul 11 00:22:19.995920 env[1316]: time="2025-07-11T00:22:19.995774604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xr7vr,Uid:26d03df9-579e-4ee3-a314-28ef2eef7859,Namespace:kube-system,Attempt:1,}" Jul 11 00:22:20.105589 systemd-networkd[1099]: cali735d7e9929e: Link UP Jul 11 00:22:20.107647 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:22:20.107738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali735d7e9929e: link becomes ready Jul 11 00:22:20.107786 systemd-networkd[1099]: cali735d7e9929e: Gained carrier Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.013 [INFO][3887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.027 [INFO][3887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0 goldmane-58fd7646b9- calico-system 8b96db4e-8484-47ca-a223-07747800a0c8 958 0 2025-07-11 00:22:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-z7hg6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali735d7e9929e [] [] }} ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.027 [INFO][3887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.065 [INFO][3915] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" HandleID="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.065 [INFO][3915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" HandleID="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-z7hg6", "timestamp":"2025-07-11 00:22:20.065316894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.065 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.065 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.065 [INFO][3915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.075 [INFO][3915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.081 [INFO][3915] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.085 [INFO][3915] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.087 [INFO][3915] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.090 [INFO][3915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.090 [INFO][3915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.091 [INFO][3915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.095 [INFO][3915] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.101 [INFO][3915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" host="localhost" Jul 11 
00:22:20.123163 env[1316]: 2025-07-11 00:22:20.101 [INFO][3915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" host="localhost" Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.101 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:20.123163 env[1316]: 2025-07-11 00:22:20.101 [INFO][3915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" HandleID="k8s-pod-network.421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.123759 env[1316]: 2025-07-11 00:22:20.103 [INFO][3887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8b96db4e-8484-47ca-a223-07747800a0c8", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-z7hg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali735d7e9929e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.123759 env[1316]: 2025-07-11 00:22:20.103 [INFO][3887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.123759 env[1316]: 2025-07-11 00:22:20.103 [INFO][3887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali735d7e9929e ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.123759 env[1316]: 2025-07-11 00:22:20.108 [INFO][3887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.123759 env[1316]: 2025-07-11 00:22:20.109 [INFO][3887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8b96db4e-8484-47ca-a223-07747800a0c8", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb", Pod:"goldmane-58fd7646b9-z7hg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali735d7e9929e", MAC:"8e:6e:78:46:6c:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.123759 env[1316]: 2025-07-11 00:22:20.121 [INFO][3887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb" Namespace="calico-system" Pod="goldmane-58fd7646b9-z7hg6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:20.131851 env[1316]: time="2025-07-11T00:22:20.131783718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.131851 env[1316]: time="2025-07-11T00:22:20.131840038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.132035 env[1316]: time="2025-07-11T00:22:20.131855318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.132175 env[1316]: time="2025-07-11T00:22:20.132119877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb pid=3949 runtime=io.containerd.runc.v2 Jul 11 00:22:20.169776 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:20.196609 env[1316]: time="2025-07-11T00:22:20.196540470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-z7hg6,Uid:8b96db4e-8484-47ca-a223-07747800a0c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb\"" Jul 11 00:22:20.199407 env[1316]: time="2025-07-11T00:22:20.199375537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:22:20.213175 systemd-networkd[1099]: cali4c5a69422d5: Link UP Jul 11 00:22:20.214557 systemd-networkd[1099]: cali4c5a69422d5: Gained carrier Jul 11 00:22:20.214915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4c5a69422d5: link becomes ready Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.033 [INFO][3902] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.055 [INFO][3902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0 coredns-7c65d6cfc9- kube-system 
26d03df9-579e-4ee3-a314-28ef2eef7859 959 0 2025-07-11 00:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-xr7vr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c5a69422d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.055 [INFO][3902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.090 [INFO][3923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" HandleID="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.090 [INFO][3923] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" HandleID="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-xr7vr", "timestamp":"2025-07-11 00:22:20.090368982 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.090 [INFO][3923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.101 [INFO][3923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.101 [INFO][3923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.175 [INFO][3923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.181 [INFO][3923] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.185 [INFO][3923] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.187 [INFO][3923] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.190 [INFO][3923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.190 [INFO][3923] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.192 [INFO][3923] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7 Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.196 [INFO][3923] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.205 [INFO][3923] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.206 [INFO][3923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" host="localhost" Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.206 [INFO][3923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:20.229788 env[1316]: 2025-07-11 00:22:20.206 [INFO][3923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" HandleID="k8s-pod-network.c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.230478 env[1316]: 2025-07-11 00:22:20.211 [INFO][3902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26d03df9-579e-4ee3-a314-28ef2eef7859", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-xr7vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c5a69422d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.230478 env[1316]: 2025-07-11 00:22:20.211 [INFO][3902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.230478 env[1316]: 2025-07-11 00:22:20.211 [INFO][3902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c5a69422d5 ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.230478 
env[1316]: 2025-07-11 00:22:20.215 [INFO][3902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.230478 env[1316]: 2025-07-11 00:22:20.215 [INFO][3902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26d03df9-579e-4ee3-a314-28ef2eef7859", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7", Pod:"coredns-7c65d6cfc9-xr7vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c5a69422d5", MAC:"2e:1a:f7:0f:04:5c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.230478 env[1316]: 2025-07-11 00:22:20.228 [INFO][3902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xr7vr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0" Jul 11 00:22:20.275429 env[1316]: time="2025-07-11T00:22:20.275244079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.275429 env[1316]: time="2025-07-11T00:22:20.275283359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.275429 env[1316]: time="2025-07-11T00:22:20.275293839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.275660 env[1316]: time="2025-07-11T00:22:20.275505478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7 pid=3997 runtime=io.containerd.runc.v2 Jul 11 00:22:20.317369 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:20.338134 env[1316]: time="2025-07-11T00:22:20.338080559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xr7vr,Uid:26d03df9-579e-4ee3-a314-28ef2eef7859,Namespace:kube-system,Attempt:1,} returns sandbox id \"c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7\"" Jul 11 00:22:20.338923 kubelet[2117]: E0711 00:22:20.338856 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:20.341649 env[1316]: time="2025-07-11T00:22:20.341572344Z" level=info msg="CreateContainer within sandbox \"c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:22:20.353642 env[1316]: time="2025-07-11T00:22:20.353600170Z" level=info msg="CreateContainer within sandbox \"c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc854dfa4afd638a007b100dc5b15557d55629c25c695e6e860fab9a63a09ca3\"" Jul 11 00:22:20.354065 env[1316]: time="2025-07-11T00:22:20.354026048Z" level=info msg="StartContainer for \"bc854dfa4afd638a007b100dc5b15557d55629c25c695e6e860fab9a63a09ca3\"" Jul 11 00:22:20.404750 env[1316]: time="2025-07-11T00:22:20.404709783Z" level=info msg="StartContainer for \"bc854dfa4afd638a007b100dc5b15557d55629c25c695e6e860fab9a63a09ca3\" returns successfully" 
Jul 11 00:22:20.881676 env[1316]: time="2025-07-11T00:22:20.881637019Z" level=info msg="StopPodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\"" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.921 [INFO][4103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.922 [INFO][4103] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" iface="eth0" netns="/var/run/netns/cni-b097d746-1d1e-6a62-0435-cc69c312e4d9" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.922 [INFO][4103] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" iface="eth0" netns="/var/run/netns/cni-b097d746-1d1e-6a62-0435-cc69c312e4d9" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.922 [INFO][4103] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" iface="eth0" netns="/var/run/netns/cni-b097d746-1d1e-6a62-0435-cc69c312e4d9" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.922 [INFO][4103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.922 [INFO][4103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.942 [INFO][4112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.942 [INFO][4112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.942 [INFO][4112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.950 [WARNING][4112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.951 [INFO][4112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.952 [INFO][4112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:20.955862 env[1316]: 2025-07-11 00:22:20.954 [INFO][4103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Jul 11 00:22:20.956514 env[1316]: time="2025-07-11T00:22:20.955999968Z" level=info msg="TearDown network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" successfully" Jul 11 00:22:20.956514 env[1316]: time="2025-07-11T00:22:20.956030848Z" level=info msg="StopPodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" returns successfully" Jul 11 00:22:20.956787 env[1316]: time="2025-07-11T00:22:20.956739044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-qfmmn,Uid:0487ab34-6b79-457d-aa32-776a344009da,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:22:20.972580 systemd[1]: run-netns-cni\x2db097d746\x2d1d1e\x2d6a62\x2d0435\x2dcc69c312e4d9.mount: Deactivated successfully. 
Jul 11 00:22:21.001922 kubelet[2117]: E0711 00:22:21.001827 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:21.014023 kubelet[2117]: I0711 00:22:21.013613 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xr7vr" podStartSLOduration=34.013596233 podStartE2EDuration="34.013596233s" podCreationTimestamp="2025-07-11 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:21.0119842 +0000 UTC m=+41.242441365" watchObservedRunningTime="2025-07-11 00:22:21.013596233 +0000 UTC m=+41.244053358" Jul 11 00:22:21.019000 audit[4135]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.021184 kernel: kauditd_printk_skb: 42 callbacks suppressed Jul 11 00:22:21.021235 kernel: audit: type=1325 audit(1752193341.019:311): table=filter:101 family=2 entries=20 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.019000 audit[4135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcef701d0 a2=0 a3=1 items=0 ppid=2270 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:21.026971 kernel: audit: type=1300 audit(1752193341.019:311): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcef701d0 a2=0 a3=1 items=0 ppid=2270 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 
00:22:21.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.032151 kernel: audit: type=1327 audit(1752193341.019:311): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.032000 audit[4135]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.032000 audit[4135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcef701d0 a2=0 a3=1 items=0 ppid=2270 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:21.039529 kernel: audit: type=1325 audit(1752193341.032:312): table=nat:102 family=2 entries=14 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.039590 kernel: audit: type=1300 audit(1752193341.032:312): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcef701d0 a2=0 a3=1 items=0 ppid=2270 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:21.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.042630 kernel: audit: type=1327 audit(1752193341.032:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.045000 audit[4144]: NETFILTER_CFG table=filter:103 family=2 entries=17 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.045000 audit[4144]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5be9660 a2=0 a3=1 items=0 ppid=2270 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:21.052471 kernel: audit: type=1325 audit(1752193341.045:313): table=filter:103 family=2 entries=17 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.052556 kernel: audit: type=1300 audit(1752193341.045:313): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5be9660 a2=0 a3=1 items=0 ppid=2270 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:21.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.054299 kernel: audit: type=1327 audit(1752193341.045:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.054341 kernel: audit: type=1325 audit(1752193341.052:314): table=nat:104 family=2 entries=35 op=nft_register_chain pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.052000 audit[4144]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:21.052000 audit[4144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd5be9660 a2=0 a3=1 items=0 ppid=2270 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:21.052000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:21.099767 systemd-networkd[1099]: cali0fa25579d8a: Link UP Jul 11 00:22:21.100905 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0fa25579d8a: link becomes ready Jul 11 00:22:21.101018 systemd-networkd[1099]: cali0fa25579d8a: Gained carrier Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:20.989 [INFO][4121] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.010 [INFO][4121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0 calico-apiserver-5f447458f6- calico-apiserver 0487ab34-6b79-457d-aa32-776a344009da 978 0 2025-07-11 00:21:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f447458f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f447458f6-qfmmn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0fa25579d8a [] [] }} ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.010 [INFO][4121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.053 [INFO][4137] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" HandleID="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.054 [INFO][4137] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" HandleID="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f447458f6-qfmmn", "timestamp":"2025-07-11 00:22:21.053944658 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.054 [INFO][4137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.054 [INFO][4137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.054 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.064 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.071 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.075 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.078 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.081 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.081 [INFO][4137] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.082 [INFO][4137] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151 Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.089 [INFO][4137] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.096 [INFO][4137] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" host="localhost" Jul 11 
00:22:21.112934 env[1316]: 2025-07-11 00:22:21.096 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" host="localhost" Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.096 [INFO][4137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:21.112934 env[1316]: 2025-07-11 00:22:21.096 [INFO][4137] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" HandleID="k8s-pod-network.921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.113508 env[1316]: 2025-07-11 00:22:21.098 [INFO][4121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0487ab34-6b79-457d-aa32-776a344009da", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f447458f6-qfmmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fa25579d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:21.113508 env[1316]: 2025-07-11 00:22:21.098 [INFO][4121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.113508 env[1316]: 2025-07-11 00:22:21.098 [INFO][4121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fa25579d8a ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.113508 env[1316]: 2025-07-11 00:22:21.101 [INFO][4121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.113508 env[1316]: 2025-07-11 00:22:21.102 [INFO][4121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" 
Pod="calico-apiserver-5f447458f6-qfmmn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0487ab34-6b79-457d-aa32-776a344009da", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151", Pod:"calico-apiserver-5f447458f6-qfmmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fa25579d8a", MAC:"46:34:3a:70:ca:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:21.113508 env[1316]: 2025-07-11 00:22:21.110 [INFO][4121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-qfmmn" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0" Jul 11 00:22:21.123227 env[1316]: time="2025-07-11T00:22:21.123156678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:21.123332 env[1316]: time="2025-07-11T00:22:21.123237918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:21.123332 env[1316]: time="2025-07-11T00:22:21.123264718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:21.123518 env[1316]: time="2025-07-11T00:22:21.123481637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151 pid=4161 runtime=io.containerd.runc.v2 Jul 11 00:22:21.166444 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:21.190957 env[1316]: time="2025-07-11T00:22:21.190917505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-qfmmn,Uid:0487ab34-6b79-457d-aa32-776a344009da,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151\"" Jul 11 00:22:21.498003 systemd-networkd[1099]: cali4c5a69422d5: Gained IPv6LL Jul 11 00:22:21.882033 systemd-networkd[1099]: cali735d7e9929e: Gained IPv6LL Jul 11 00:22:22.004994 kubelet[2117]: E0711 00:22:22.004601 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:22.020474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526976301.mount: Deactivated successfully. 
Jul 11 00:22:22.458045 systemd-networkd[1099]: cali0fa25579d8a: Gained IPv6LL Jul 11 00:22:22.581180 env[1316]: time="2025-07-11T00:22:22.581119231Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:22.583688 env[1316]: time="2025-07-11T00:22:22.583642420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:22.585492 env[1316]: time="2025-07-11T00:22:22.585462093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:22.587795 env[1316]: time="2025-07-11T00:22:22.587759283Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:22.588374 env[1316]: time="2025-07-11T00:22:22.588338080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 11 00:22:22.589762 env[1316]: time="2025-07-11T00:22:22.589541835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:22:22.592299 env[1316]: time="2025-07-11T00:22:22.592265344Z" level=info msg="CreateContainer within sandbox \"421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:22:22.607135 env[1316]: time="2025-07-11T00:22:22.607087681Z" level=info msg="CreateContainer within sandbox \"421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"288e96fc2f9eff07e79c299a656604981da3eebefb2d38ef761a8d7b5d220438\"" Jul 11 00:22:22.608820 env[1316]: time="2025-07-11T00:22:22.608792834Z" level=info msg="StartContainer for \"288e96fc2f9eff07e79c299a656604981da3eebefb2d38ef761a8d7b5d220438\"" Jul 11 00:22:22.668437 env[1316]: time="2025-07-11T00:22:22.668392383Z" level=info msg="StartContainer for \"288e96fc2f9eff07e79c299a656604981da3eebefb2d38ef761a8d7b5d220438\" returns successfully" Jul 11 00:22:22.882531 env[1316]: time="2025-07-11T00:22:22.882471321Z" level=info msg="StopPodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\"" Jul 11 00:22:22.882953 env[1316]: time="2025-07-11T00:22:22.882920319Z" level=info msg="StopPodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\"" Jul 11 00:22:22.883016 env[1316]: time="2025-07-11T00:22:22.882926199Z" level=info msg="StopPodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\"" Jul 11 00:22:23.008253 kubelet[2117]: E0711 00:22:23.007528 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:23.034000 audit[4362]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=4362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:23.034000 audit[4362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffff7cec90 a2=0 a3=1 items=0 ppid=2270 pid=4362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:23.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:23.036069 env[1316]: 
2025-07-11 00:22:22.968 [INFO][4315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:22.968 [INFO][4315] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" iface="eth0" netns="/var/run/netns/cni-2714f2f1-1a44-2f25-9d27-e0670fa02a58" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:22.970 [INFO][4315] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" iface="eth0" netns="/var/run/netns/cni-2714f2f1-1a44-2f25-9d27-e0670fa02a58" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:22.970 [INFO][4315] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" iface="eth0" netns="/var/run/netns/cni-2714f2f1-1a44-2f25-9d27-e0670fa02a58" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:22.970 [INFO][4315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:22.970 [INFO][4315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.014 [INFO][4337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.014 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.014 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.025 [WARNING][4337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.025 [INFO][4337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.030 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.036069 env[1316]: 2025-07-11 00:22:23.033 [INFO][4315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:23.038407 systemd[1]: run-netns-cni\x2d2714f2f1\x2d1a44\x2d2f25\x2d9d27\x2de0670fa02a58.mount: Deactivated successfully. 
Jul 11 00:22:23.038715 env[1316]: time="2025-07-11T00:22:23.038472708Z" level=info msg="TearDown network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" successfully" Jul 11 00:22:23.038715 env[1316]: time="2025-07-11T00:22:23.038512948Z" level=info msg="StopPodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" returns successfully" Jul 11 00:22:23.039357 env[1316]: time="2025-07-11T00:22:23.039324944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-544gl,Uid:60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:22:23.038000 audit[4362]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=4362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:23.038000 audit[4362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffff7cec90 a2=0 a3=1 items=0 ppid=2270 pid=4362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:23.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:22.966 [INFO][4302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:22.966 [INFO][4302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" iface="eth0" netns="/var/run/netns/cni-c078de67-049b-2ba1-8d9f-6403fa81e230" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:22.966 [INFO][4302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" iface="eth0" netns="/var/run/netns/cni-c078de67-049b-2ba1-8d9f-6403fa81e230" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:22.966 [INFO][4302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" iface="eth0" netns="/var/run/netns/cni-c078de67-049b-2ba1-8d9f-6403fa81e230" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:22.967 [INFO][4302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:22.967 [INFO][4302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.014 [INFO][4335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.021 [INFO][4335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.030 [INFO][4335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.041 [WARNING][4335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.041 [INFO][4335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.042 [INFO][4335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.049498 env[1316]: 2025-07-11 00:22:23.047 [INFO][4302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:23.052352 systemd[1]: run-netns-cni\x2dc078de67\x2d049b\x2d2ba1\x2d8d9f\x2d6403fa81e230.mount: Deactivated successfully. 
Jul 11 00:22:23.052600 env[1316]: time="2025-07-11T00:22:23.052564250Z" level=info msg="TearDown network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" successfully" Jul 11 00:22:23.052679 env[1316]: time="2025-07-11T00:22:23.052663010Z" level=info msg="StopPodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" returns successfully" Jul 11 00:22:23.056026 env[1316]: time="2025-07-11T00:22:23.055969916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q27mn,Uid:4568eb55-c992-4e0f-86d7-395721225945,Namespace:calico-system,Attempt:1,}" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:22.983 [INFO][4320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:22.983 [INFO][4320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" iface="eth0" netns="/var/run/netns/cni-fab3f626-8ea4-a375-e50e-fce9151abb72" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:22.983 [INFO][4320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" iface="eth0" netns="/var/run/netns/cni-fab3f626-8ea4-a375-e50e-fce9151abb72" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:22.986 [INFO][4320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" iface="eth0" netns="/var/run/netns/cni-fab3f626-8ea4-a375-e50e-fce9151abb72" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:22.986 [INFO][4320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:22.986 [INFO][4320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.029 [INFO][4346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.030 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.042 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.058 [WARNING][4346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.058 [INFO][4346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.060 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.064989 env[1316]: 2025-07-11 00:22:23.063 [INFO][4320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:23.065463 env[1316]: time="2025-07-11T00:22:23.065191878Z" level=info msg="TearDown network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" successfully" Jul 11 00:22:23.065463 env[1316]: time="2025-07-11T00:22:23.065224238Z" level=info msg="StopPodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" returns successfully" Jul 11 00:22:23.065755 kubelet[2117]: E0711 00:22:23.065714 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:23.066674 env[1316]: time="2025-07-11T00:22:23.066630752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkpx,Uid:391acb18-2a72-4443-9d5f-c7fd9457ee12,Namespace:kube-system,Attempt:1,}" Jul 11 00:22:23.181580 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:22:23.181708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie7b7ab47d90: link becomes ready Jul 11 00:22:23.179823 
systemd-networkd[1099]: calie7b7ab47d90: Link UP Jul 11 00:22:23.184067 systemd-networkd[1099]: calie7b7ab47d90: Gained carrier Jul 11 00:22:23.192861 kubelet[2117]: I0711 00:22:23.192810 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-z7hg6" podStartSLOduration=20.802519937 podStartE2EDuration="23.192795075s" podCreationTimestamp="2025-07-11 00:22:00 +0000 UTC" firstStartedPulling="2025-07-11 00:22:20.199095658 +0000 UTC m=+40.429552783" lastFinishedPulling="2025-07-11 00:22:22.589370796 +0000 UTC m=+42.819827921" observedRunningTime="2025-07-11 00:22:23.023943447 +0000 UTC m=+43.254400572" watchObservedRunningTime="2025-07-11 00:22:23.192795075 +0000 UTC m=+43.423252200" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.086 [INFO][4363] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.099 [INFO][4363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0 calico-apiserver-5f447458f6- calico-apiserver 60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a 1013 0 2025-07-11 00:21:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f447458f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f447458f6-544gl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7b7ab47d90 [] [] }} ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.099 [INFO][4363] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.135 [INFO][4406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" HandleID="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.135 [INFO][4406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" HandleID="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b1480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f447458f6-544gl", "timestamp":"2025-07-11 00:22:23.135244671 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.135 [INFO][4406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.135 [INFO][4406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.135 [INFO][4406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.148 [INFO][4406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.152 [INFO][4406] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.156 [INFO][4406] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.158 [INFO][4406] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.160 [INFO][4406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.160 [INFO][4406] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.161 [INFO][4406] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200 Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.165 [INFO][4406] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.171 [INFO][4406] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" host="localhost" Jul 11 
00:22:23.194502 env[1316]: 2025-07-11 00:22:23.171 [INFO][4406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" host="localhost" Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.171 [INFO][4406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.194502 env[1316]: 2025-07-11 00:22:23.172 [INFO][4406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" HandleID="k8s-pod-network.456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.195066 env[1316]: 2025-07-11 00:22:23.178 [INFO][4363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f447458f6-544gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7b7ab47d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.195066 env[1316]: 2025-07-11 00:22:23.178 [INFO][4363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.195066 env[1316]: 2025-07-11 00:22:23.178 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7b7ab47d90 ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.195066 env[1316]: 2025-07-11 00:22:23.181 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.195066 env[1316]: 2025-07-11 00:22:23.183 [INFO][4363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" 
Pod="calico-apiserver-5f447458f6-544gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200", Pod:"calico-apiserver-5f447458f6-544gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7b7ab47d90", MAC:"fa:8b:ea:69:08:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.195066 env[1316]: 2025-07-11 00:22:23.192 [INFO][4363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200" Namespace="calico-apiserver" Pod="calico-apiserver-5f447458f6-544gl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:23.208507 env[1316]: time="2025-07-11T00:22:23.208429011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:23.208507 env[1316]: time="2025-07-11T00:22:23.208475411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:23.208507 env[1316]: time="2025-07-11T00:22:23.208500931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.209155 env[1316]: time="2025-07-11T00:22:23.209103968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200 pid=4446 runtime=io.containerd.runc.v2 Jul 11 00:22:23.253662 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:23.279716 env[1316]: time="2025-07-11T00:22:23.279674439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f447458f6-544gl,Uid:60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200\"" Jul 11 00:22:23.288324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib894ea9de32: link becomes ready Jul 11 00:22:23.286277 systemd-networkd[1099]: calib894ea9de32: Link UP Jul 11 00:22:23.288741 systemd-networkd[1099]: calib894ea9de32: Gained carrier Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.097 [INFO][4374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.113 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-csi--node--driver--q27mn-eth0 csi-node-driver- calico-system 4568eb55-c992-4e0f-86d7-395721225945 1012 0 2025-07-11 00:22:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q27mn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib894ea9de32 [] [] }} ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.113 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.147 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" HandleID="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Workload="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.148 [INFO][4414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" HandleID="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Workload="localhost-k8s-csi--node--driver--q27mn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ae4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-q27mn", "timestamp":"2025-07-11 00:22:23.147933819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.148 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.171 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.171 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.248 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.253 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.257 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.259 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.262 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.262 [INFO][4414] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.264 [INFO][4414] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.269 [INFO][4414] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.277 [INFO][4414] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.277 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" host="localhost" Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.277 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.298641 env[1316]: 2025-07-11 00:22:23.277 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" HandleID="k8s-pod-network.f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Workload="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.299222 env[1316]: 2025-07-11 00:22:23.281 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q27mn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568eb55-c992-4e0f-86d7-395721225945", 
ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q27mn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib894ea9de32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.299222 env[1316]: 2025-07-11 00:22:23.281 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.299222 env[1316]: 2025-07-11 00:22:23.281 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib894ea9de32 ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.299222 env[1316]: 2025-07-11 00:22:23.287 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.299222 env[1316]: 2025-07-11 00:22:23.287 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q27mn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568eb55-c992-4e0f-86d7-395721225945", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b", Pod:"csi-node-driver-q27mn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib894ea9de32", MAC:"d6:9c:c0:f3:97:c5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.299222 env[1316]: 2025-07-11 00:22:23.295 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b" Namespace="calico-system" Pod="csi-node-driver-q27mn" WorkloadEndpoint="localhost-k8s-csi--node--driver--q27mn-eth0" Jul 11 00:22:23.308236 env[1316]: time="2025-07-11T00:22:23.308154362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:23.308236 env[1316]: time="2025-07-11T00:22:23.308204602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:23.308403 env[1316]: time="2025-07-11T00:22:23.308215402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.308744 env[1316]: time="2025-07-11T00:22:23.308620280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b pid=4496 runtime=io.containerd.runc.v2 Jul 11 00:22:23.343057 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:23.362282 env[1316]: time="2025-07-11T00:22:23.362224140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q27mn,Uid:4568eb55-c992-4e0f-86d7-395721225945,Namespace:calico-system,Attempt:1,} returns sandbox id \"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b\"" Jul 11 00:22:23.387506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali55f974b092d: link becomes ready Jul 11 00:22:23.386856 systemd-networkd[1099]: cali55f974b092d: Link UP Jul 11 00:22:23.387685 
systemd-networkd[1099]: cali55f974b092d: Gained carrier Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.110 [INFO][4390] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.129 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0 coredns-7c65d6cfc9- kube-system 391acb18-2a72-4443-9d5f-c7fd9457ee12 1014 0 2025-07-11 00:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-pzkpx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali55f974b092d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.129 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.157 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" HandleID="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.157 [INFO][4422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" 
HandleID="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-pzkpx", "timestamp":"2025-07-11 00:22:23.157714019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.157 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.277 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.278 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.350 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.354 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.368 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.369 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.372 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.372 [INFO][4422] ipam/ipam.go 1220: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.373 [INFO][4422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.376 [INFO][4422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.381 [INFO][4422] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.381 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" host="localhost" Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.381 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:23.398777 env[1316]: 2025-07-11 00:22:23.382 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" HandleID="k8s-pod-network.24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.399396 env[1316]: 2025-07-11 00:22:23.384 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"391acb18-2a72-4443-9d5f-c7fd9457ee12", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-pzkpx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55f974b092d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.399396 env[1316]: 2025-07-11 00:22:23.384 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.399396 env[1316]: 2025-07-11 00:22:23.384 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55f974b092d ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.399396 env[1316]: 2025-07-11 00:22:23.386 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.399396 env[1316]: 2025-07-11 00:22:23.386 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"391acb18-2a72-4443-9d5f-c7fd9457ee12", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f", Pod:"coredns-7c65d6cfc9-pzkpx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55f974b092d", MAC:"a6:62:77:f7:3e:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.399396 env[1316]: 2025-07-11 00:22:23.396 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkpx" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:23.408653 env[1316]: time="2025-07-11T00:22:23.408584150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:23.408820 env[1316]: time="2025-07-11T00:22:23.408788069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:23.408947 env[1316]: time="2025-07-11T00:22:23.408915309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.409228 env[1316]: time="2025-07-11T00:22:23.409193748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f pid=4545 runtime=io.containerd.runc.v2 Jul 11 00:22:23.434657 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:23.452386 env[1316]: time="2025-07-11T00:22:23.452341331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkpx,Uid:391acb18-2a72-4443-9d5f-c7fd9457ee12,Namespace:kube-system,Attempt:1,} returns sandbox id \"24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f\"" Jul 11 00:22:23.453031 kubelet[2117]: E0711 00:22:23.453005 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:23.457307 env[1316]: time="2025-07-11T00:22:23.457254191Z" level=info msg="CreateContainer within sandbox \"24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:22:23.468228 env[1316]: time="2025-07-11T00:22:23.468177586Z" level=info 
msg="CreateContainer within sandbox \"24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a78b372f9324ed3c61e81ff4a8b09d5f476166cb92a287fb783225f07584a998\"" Jul 11 00:22:23.468645 env[1316]: time="2025-07-11T00:22:23.468617744Z" level=info msg="StartContainer for \"a78b372f9324ed3c61e81ff4a8b09d5f476166cb92a287fb783225f07584a998\"" Jul 11 00:22:23.517012 env[1316]: time="2025-07-11T00:22:23.516959066Z" level=info msg="StartContainer for \"a78b372f9324ed3c61e81ff4a8b09d5f476166cb92a287fb783225f07584a998\" returns successfully" Jul 11 00:22:23.881788 env[1316]: time="2025-07-11T00:22:23.881717730Z" level=info msg="StopPodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\"" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.932 [INFO][4627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.932 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" iface="eth0" netns="/var/run/netns/cni-d79d1148-cb90-4f82-45e2-0b4462466d5f" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.932 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" iface="eth0" netns="/var/run/netns/cni-d79d1148-cb90-4f82-45e2-0b4462466d5f" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.932 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" iface="eth0" netns="/var/run/netns/cni-d79d1148-cb90-4f82-45e2-0b4462466d5f" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.932 [INFO][4627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.932 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.966 [INFO][4642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.967 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.967 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.980 [WARNING][4642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.980 [INFO][4642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.982 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.990078 env[1316]: 2025-07-11 00:22:23.988 [INFO][4627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:23.990575 env[1316]: time="2025-07-11T00:22:23.990226885Z" level=info msg="TearDown network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" successfully" Jul 11 00:22:23.990575 env[1316]: time="2025-07-11T00:22:23.990259365Z" level=info msg="StopPodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" returns successfully" Jul 11 00:22:23.990851 env[1316]: time="2025-07-11T00:22:23.990822803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f89b684d9-vbwhc,Uid:149173fd-5331-45f1-97cd-d2699b6084a9,Namespace:calico-system,Attempt:1,}" Jul 11 00:22:24.014131 kubelet[2117]: I0711 00:22:24.014092 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:22:24.015541 kubelet[2117]: E0711 00:22:24.015470 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 
00:22:24.026214 systemd[1]: run-netns-cni\x2dd79d1148\x2dcb90\x2d4f82\x2d45e2\x2d0b4462466d5f.mount: Deactivated successfully. Jul 11 00:22:24.026393 systemd[1]: run-netns-cni\x2dfab3f626\x2d8ea4\x2da375\x2de50e\x2dfce9151abb72.mount: Deactivated successfully. Jul 11 00:22:24.040931 kubelet[2117]: I0711 00:22:24.040799 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pzkpx" podStartSLOduration=37.040782402 podStartE2EDuration="37.040782402s" podCreationTimestamp="2025-07-11 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:24.040046285 +0000 UTC m=+44.270503410" watchObservedRunningTime="2025-07-11 00:22:24.040782402 +0000 UTC m=+44.271239527" Jul 11 00:22:24.122000 audit[4678]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=4678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:24.122000 audit[4678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebf6a410 a2=0 a3=1 items=0 ppid=2270 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:24.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:24.130000 audit[4678]: NETFILTER_CFG table=nat:108 family=2 entries=44 op=nft_register_rule pid=4678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:24.130000 audit[4678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffebf6a410 a2=0 a3=1 items=0 ppid=2270 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:24.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:24.147000 audit[4690]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=4690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:24.147000 audit[4690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5fc96a0 a2=0 a3=1 items=0 ppid=2270 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:24.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:24.169000 audit[4690]: NETFILTER_CFG table=nat:110 family=2 entries=56 op=nft_register_chain pid=4690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:24.169000 audit[4690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff5fc96a0 a2=0 a3=1 items=0 ppid=2270 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:24.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:24.212652 systemd-networkd[1099]: cali6a5bc5932d2: Link UP Jul 11 00:22:24.215031 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:22:24.215114 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6a5bc5932d2: link becomes ready Jul 11 00:22:24.215058 systemd-networkd[1099]: cali6a5bc5932d2: Gained carrier Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.109 [INFO][4663] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.133 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0 calico-kube-controllers-7f89b684d9- calico-system 149173fd-5331-45f1-97cd-d2699b6084a9 1046 0 2025-07-11 00:22:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f89b684d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f89b684d9-vbwhc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a5bc5932d2 [] [] }} ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.134 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.167 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" HandleID="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.168 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" HandleID="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f89b684d9-vbwhc", "timestamp":"2025-07-11 00:22:24.167596776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.168 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.168 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.168 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.181 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.189 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.193 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.195 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.197 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 
00:22:24.227616 env[1316]: 2025-07-11 00:22:24.197 [INFO][4680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.198 [INFO][4680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6 Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.202 [INFO][4680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.208 [INFO][4680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.208 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" host="localhost" Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.208 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:24.227616 env[1316]: 2025-07-11 00:22:24.208 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" HandleID="k8s-pod-network.6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.228196 env[1316]: 2025-07-11 00:22:24.210 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0", GenerateName:"calico-kube-controllers-7f89b684d9-", Namespace:"calico-system", SelfLink:"", UID:"149173fd-5331-45f1-97cd-d2699b6084a9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f89b684d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f89b684d9-vbwhc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a5bc5932d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:24.228196 env[1316]: 2025-07-11 00:22:24.210 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.228196 env[1316]: 2025-07-11 00:22:24.210 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a5bc5932d2 ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.228196 env[1316]: 2025-07-11 00:22:24.215 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.228196 env[1316]: 2025-07-11 00:22:24.216 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0", GenerateName:"calico-kube-controllers-7f89b684d9-", Namespace:"calico-system", SelfLink:"", UID:"149173fd-5331-45f1-97cd-d2699b6084a9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f89b684d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6", Pod:"calico-kube-controllers-7f89b684d9-vbwhc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a5bc5932d2", MAC:"5a:18:2a:f1:1d:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:24.228196 env[1316]: 2025-07-11 00:22:24.224 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6" Namespace="calico-system" Pod="calico-kube-controllers-7f89b684d9-vbwhc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:24.236548 env[1316]: time="2025-07-11T00:22:24.236475422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:24.236709 env[1316]: time="2025-07-11T00:22:24.236684541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:24.236790 env[1316]: time="2025-07-11T00:22:24.236769020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:24.237095 env[1316]: time="2025-07-11T00:22:24.237062179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6 pid=4707 runtime=io.containerd.runc.v2 Jul 11 00:22:24.266706 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:24.284660 env[1316]: time="2025-07-11T00:22:24.284615710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f89b684d9-vbwhc,Uid:149173fd-5331-45f1-97cd-d2699b6084a9,Namespace:calico-system,Attempt:1,} returns sandbox id \"6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6\"" Jul 11 00:22:24.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.33:22-10.0.0.1:51516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:24.391667 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:51516.service. 
Jul 11 00:22:24.437000 audit[4742]: USER_ACCT pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:24.438806 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:24.439000 audit[4742]: CRED_ACQ pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:24.439000 audit[4742]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd470b830 a2=3 a3=1 items=0 ppid=1 pid=4742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:24.439000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:24.440695 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:24.444622 systemd-logind[1302]: New session 9 of user core. Jul 11 00:22:24.445039 systemd[1]: Started session-9.scope. 
Jul 11 00:22:24.448000 audit[4742]: USER_START pid=4742 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:24.449000 audit[4745]: CRED_ACQ pid=4745 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:24.603504 sshd[4742]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:24.603000 audit[4742]: USER_END pid=4742 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:24.603000 audit[4742]: CRED_DISP pid=4742 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:24.606481 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:51516.service: Deactivated successfully. Jul 11 00:22:24.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.33:22-10.0.0.1:51516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:24.607512 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:22:24.607718 systemd-logind[1302]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:22:24.608733 systemd-logind[1302]: Removed session 9. 
Jul 11 00:22:24.826029 systemd-networkd[1099]: calie7b7ab47d90: Gained IPv6LL Jul 11 00:22:25.019453 kubelet[2117]: E0711 00:22:25.019417 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:25.031453 env[1316]: time="2025-07-11T00:22:25.031404693Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.033461 env[1316]: time="2025-07-11T00:22:25.033422165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.035246 env[1316]: time="2025-07-11T00:22:25.035218438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.037261 env[1316]: time="2025-07-11T00:22:25.037221390Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.037857 env[1316]: time="2025-07-11T00:22:25.037826668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:22:25.039587 env[1316]: time="2025-07-11T00:22:25.039324262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:22:25.040388 env[1316]: time="2025-07-11T00:22:25.040353578Z" level=info msg="CreateContainer within sandbox \"921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:22:25.058813 env[1316]: time="2025-07-11T00:22:25.058768107Z" level=info msg="CreateContainer within sandbox \"921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"580eaef93180edc746fd645e752ff065e8aa1607780188af3afc7a0d514dee4a\"" Jul 11 00:22:25.060488 env[1316]: time="2025-07-11T00:22:25.060431860Z" level=info msg="StartContainer for \"580eaef93180edc746fd645e752ff065e8aa1607780188af3afc7a0d514dee4a\"" Jul 11 00:22:25.106976 kubelet[2117]: I0711 00:22:25.106762 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:22:25.144363 env[1316]: time="2025-07-11T00:22:25.144317295Z" level=info msg="StartContainer for \"580eaef93180edc746fd645e752ff065e8aa1607780188af3afc7a0d514dee4a\" returns successfully" Jul 11 00:22:25.210014 systemd-networkd[1099]: calib894ea9de32: Gained IPv6LL Jul 11 00:22:25.274008 systemd-networkd[1099]: cali6a5bc5932d2: Gained IPv6LL Jul 11 00:22:25.288608 env[1316]: time="2025-07-11T00:22:25.288558055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.292831 env[1316]: time="2025-07-11T00:22:25.292782318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.295117 env[1316]: time="2025-07-11T00:22:25.295080109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.297039 env[1316]: time="2025-07-11T00:22:25.297009382Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:25.297580 env[1316]: time="2025-07-11T00:22:25.297553980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:22:25.301685 env[1316]: time="2025-07-11T00:22:25.300773727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:22:25.303692 env[1316]: time="2025-07-11T00:22:25.303653076Z" level=info msg="CreateContainer within sandbox \"456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:22:25.315600 env[1316]: time="2025-07-11T00:22:25.315556510Z" level=info msg="CreateContainer within sandbox \"456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6cb39d32ed82f31071732e14e9769c3d5a27c070ab6b42416bae3eaf067e4117\"" Jul 11 00:22:25.316268 env[1316]: time="2025-07-11T00:22:25.316236107Z" level=info msg="StartContainer for \"6cb39d32ed82f31071732e14e9769c3d5a27c070ab6b42416bae3eaf067e4117\"" Jul 11 00:22:25.394676 env[1316]: time="2025-07-11T00:22:25.394573043Z" level=info msg="StartContainer for \"6cb39d32ed82f31071732e14e9769c3d5a27c070ab6b42416bae3eaf067e4117\" returns successfully" Jul 11 00:22:25.402042 systemd-networkd[1099]: cali55f974b092d: Gained IPv6LL Jul 11 00:22:26.025664 kubelet[2117]: E0711 00:22:26.025631 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:26.048496 kubelet[2117]: I0711 00:22:26.048427 2117 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-apiserver/calico-apiserver-5f447458f6-544gl" podStartSLOduration=28.033242556 podStartE2EDuration="30.048400589s" podCreationTimestamp="2025-07-11 00:21:56 +0000 UTC" firstStartedPulling="2025-07-11 00:22:23.283658662 +0000 UTC m=+43.514115747" lastFinishedPulling="2025-07-11 00:22:25.298816655 +0000 UTC m=+45.529273780" observedRunningTime="2025-07-11 00:22:26.04800283 +0000 UTC m=+46.278459955" watchObservedRunningTime="2025-07-11 00:22:26.048400589 +0000 UTC m=+46.278857714" Jul 11 00:22:26.048686 kubelet[2117]: I0711 00:22:26.048616 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f447458f6-qfmmn" podStartSLOduration=26.201802864 podStartE2EDuration="30.048611228s" podCreationTimestamp="2025-07-11 00:21:56 +0000 UTC" firstStartedPulling="2025-07-11 00:22:21.192298019 +0000 UTC m=+41.422755144" lastFinishedPulling="2025-07-11 00:22:25.039106383 +0000 UTC m=+45.269563508" observedRunningTime="2025-07-11 00:22:26.037349231 +0000 UTC m=+46.267806356" watchObservedRunningTime="2025-07-11 00:22:26.048611228 +0000 UTC m=+46.279068313" Jul 11 00:22:26.056000 audit[4901]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.059607 kernel: kauditd_printk_skb: 31 callbacks suppressed Jul 11 00:22:26.059663 kernel: audit: type=1325 audit(1752193346.056:330): table=filter:111 family=2 entries=14 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.056000 audit[4901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc8b1b470 a2=0 a3=1 items=0 ppid=2270 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.063581 kernel: audit: type=1300 
audit(1752193346.056:330): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc8b1b470 a2=0 a3=1 items=0 ppid=2270 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.065681 kernel: audit: type=1327 audit(1752193346.056:330): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.066000 audit[4901]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.066000 audit[4901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc8b1b470 a2=0 a3=1 items=0 ppid=2270 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.073683 kernel: audit: type=1325 audit(1752193346.066:331): table=nat:112 family=2 entries=20 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.073742 kernel: audit: type=1300 audit(1752193346.066:331): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc8b1b470 a2=0 a3=1 items=0 ppid=2270 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.073772 kernel: audit: type=1327 audit(1752193346.066:331): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.066000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.080000 audit[4903]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.080000 audit[4903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffedaf1210 a2=0 a3=1 items=0 ppid=2270 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.087780 kernel: audit: type=1325 audit(1752193346.080:332): table=filter:113 family=2 entries=14 op=nft_register_rule pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.087826 kernel: audit: type=1300 audit(1752193346.080:332): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffedaf1210 a2=0 a3=1 items=0 ppid=2270 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.087861 kernel: audit: type=1327 audit(1752193346.080:332): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.091000 audit[4903]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.091000 audit[4903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffedaf1210 a2=0 a3=1 items=0 ppid=2270 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:26.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:26.097893 kernel: audit: type=1325 audit(1752193346.091:333): table=nat:114 family=2 entries=20 op=nft_register_rule pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:26.526450 env[1316]: time="2025-07-11T00:22:26.526404622Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:26.530430 env[1316]: time="2025-07-11T00:22:26.530392447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:26.533375 env[1316]: time="2025-07-11T00:22:26.533336396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:26.535848 env[1316]: time="2025-07-11T00:22:26.535799427Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:26.536617 env[1316]: time="2025-07-11T00:22:26.536586464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 11 00:22:26.538634 env[1316]: time="2025-07-11T00:22:26.538608616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:22:26.539966 env[1316]: time="2025-07-11T00:22:26.539933091Z" 
level=info msg="CreateContainer within sandbox \"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:22:26.558766 env[1316]: time="2025-07-11T00:22:26.558716780Z" level=info msg="CreateContainer within sandbox \"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"34650eb4f03058e89d7c275c85859d507cfa204aa4d5770d26b3f77d365c5cf7\"" Jul 11 00:22:26.559630 env[1316]: time="2025-07-11T00:22:26.559605137Z" level=info msg="StartContainer for \"34650eb4f03058e89d7c275c85859d507cfa204aa4d5770d26b3f77d365c5cf7\"" Jul 11 00:22:26.729840 env[1316]: time="2025-07-11T00:22:26.729791934Z" level=info msg="StartContainer for \"34650eb4f03058e89d7c275c85859d507cfa204aa4d5770d26b3f77d365c5cf7\" returns successfully" Jul 11 00:22:27.033675 kubelet[2117]: I0711 00:22:27.033326 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:22:27.457000 audit[4980]: NETFILTER_CFG table=filter:115 family=2 entries=13 op=nft_register_rule pid=4980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:27.457000 audit[4980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff2875390 a2=0 a3=1 items=0 ppid=2270 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:27.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:27.463000 audit[4980]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=4980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:27.463000 audit[4980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=fffff2875390 a2=0 a3=1 
items=0 ppid=2270 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:27.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:28.898899 env[1316]: time="2025-07-11T00:22:28.898838854Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:28.900456 env[1316]: time="2025-07-11T00:22:28.900418288Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:28.902144 env[1316]: time="2025-07-11T00:22:28.902120242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:28.903421 env[1316]: time="2025-07-11T00:22:28.903392517Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:28.903977 env[1316]: time="2025-07-11T00:22:28.903953515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 11 00:22:28.906599 env[1316]: time="2025-07-11T00:22:28.906022868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:22:28.917700 env[1316]: time="2025-07-11T00:22:28.917632586Z" level=info 
msg="CreateContainer within sandbox \"6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:22:28.945657 env[1316]: time="2025-07-11T00:22:28.945614486Z" level=info msg="CreateContainer within sandbox \"6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9e65a0ad2cbe2d50f12728e098e802362b0e92ff1e19852bd8c09a93be64b3c2\"" Jul 11 00:22:28.946512 env[1316]: time="2025-07-11T00:22:28.946435963Z" level=info msg="StartContainer for \"9e65a0ad2cbe2d50f12728e098e802362b0e92ff1e19852bd8c09a93be64b3c2\"" Jul 11 00:22:29.009434 env[1316]: time="2025-07-11T00:22:29.009381698Z" level=info msg="StartContainer for \"9e65a0ad2cbe2d50f12728e098e802362b0e92ff1e19852bd8c09a93be64b3c2\" returns successfully" Jul 11 00:22:29.050926 kubelet[2117]: I0711 00:22:29.050843 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f89b684d9-vbwhc" podStartSLOduration=24.431296988 podStartE2EDuration="29.050825834s" podCreationTimestamp="2025-07-11 00:22:00 +0000 UTC" firstStartedPulling="2025-07-11 00:22:24.285730825 +0000 UTC m=+44.516187950" lastFinishedPulling="2025-07-11 00:22:28.905259671 +0000 UTC m=+49.135716796" observedRunningTime="2025-07-11 00:22:29.050124036 +0000 UTC m=+49.280581161" watchObservedRunningTime="2025-07-11 00:22:29.050825834 +0000 UTC m=+49.281282959" Jul 11 00:22:29.516706 kubelet[2117]: I0711 00:22:29.516659 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:22:29.517097 kubelet[2117]: E0711 00:22:29.517043 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:29.547000 audit[5091]: NETFILTER_CFG table=filter:117 family=2 entries=11 
op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:29.547000 audit[5091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffd45a1d0 a2=0 a3=1 items=0 ppid=2270 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:29.553000 audit[5091]: NETFILTER_CFG table=nat:118 family=2 entries=29 op=nft_register_chain pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 11 00:22:29.553000 audit[5091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=fffffd45a1d0 a2=0 a3=1 items=0 ppid=2270 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 11 00:22:29.606858 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:51530.service. Jul 11 00:22:29.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.33:22-10.0.0.1:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:22:29.659348 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 51530 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:29.658000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:29.659000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:29.659000 audit[5098]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc882b890 a2=3 a3=1 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.659000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:29.661009 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:29.667880 systemd-logind[1302]: New session 10 of user core. Jul 11 00:22:29.668030 systemd[1]: Started session-10.scope. 
Jul 11 00:22:29.683000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:29.684000 audit[5117]: CRED_ACQ pid=5117 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit: BPF prog-id=10 op=LOAD Jul 11 00:22:29.846000 audit[5152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd38c3ab8 a2=98 a3=ffffd38c3aa8 items=0 ppid=5107 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.846000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 11 00:22:29.846000 audit: BPF prog-id=10 op=UNLOAD Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for 
pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.846000 audit: BPF prog-id=11 op=LOAD Jul 11 00:22:29.846000 audit[5152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd38c3968 a2=74 a3=95 items=0 ppid=5107 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.846000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 11 00:22:29.847000 audit: BPF prog-id=11 op=UNLOAD Jul 11 00:22:29.847000 audit[5152]: 
AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { bpf } for pid=5152 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit: BPF prog-id=12 op=LOAD Jul 11 00:22:29.847000 audit[5152]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd38c3998 a2=40 a3=ffffd38c39c8 items=0 ppid=5107 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.847000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 11 00:22:29.847000 audit: BPF prog-id=12 op=UNLOAD Jul 11 00:22:29.847000 audit[5152]: AVC avc: denied { perfmon } for pid=5152 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.847000 audit[5152]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffd38c3ab0 a2=50 a3=0 items=0 ppid=5107 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.847000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for 
pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit: BPF prog-id=13 op=LOAD Jul 11 00:22:29.853000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe896aa68 a2=98 a3=ffffe896aa58 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.853000 audit: BPF prog-id=13 op=UNLOAD Jul 11 
00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit: BPF prog-id=14 op=LOAD Jul 11 00:22:29.853000 
audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe896a6f8 a2=74 a3=95 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.853000 audit: BPF prog-id=14 op=UNLOAD Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: 
AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.853000 audit: BPF prog-id=15 op=LOAD Jul 11 00:22:29.853000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe896a758 a2=94 a3=2 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.853000 audit: BPF prog-id=15 op=UNLOAD Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 
11 00:22:29.954000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.954000 audit: BPF prog-id=16 op=LOAD Jul 11 00:22:29.954000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe896a718 a2=40 a3=ffffe896a748 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.954000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.955000 audit: BPF prog-id=16 op=UNLOAD Jul 11 00:22:29.955000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.955000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffe896a830 a2=50 a3=0 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.955000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe896a788 a2=28 a3=ffffe896a8b8 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe896a7b8 a2=28 a3=ffffe896a8e8 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe896a668 a2=28 a3=ffffe896a798 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe896a7d8 a2=28 a3=ffffe896a908 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe896a7b8 a2=28 a3=ffffe896a8e8 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe896a7a8 a2=28 a3=ffffe896a8d8 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe896a7d8 a2=28 a3=ffffe896a908 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe896a7b8 a2=28 a3=ffffe896a8e8 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe896a7d8 a2=28 a3=ffffe896a908 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe896a7a8 a2=28 a3=ffffe896a8d8 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.963000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.963000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe896a828 a2=28 a3=ffffe896a968 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.963000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffe896a560 a2=50 a3=0 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 
00:22:29.964000 audit: BPF prog-id=17 op=LOAD Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe896a568 a2=94 a3=5 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit: BPF prog-id=17 op=UNLOAD Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffe896a670 a2=50 a3=0 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffe896a7b8 a2=4 a3=3 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: 
AVC avc: denied { confidentiality } for pid=5153 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe896a798 a2=94 a3=6 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for 
pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { confidentiality } for pid=5153 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe8969f68 a2=94 a3=83 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { perfmon } for pid=5153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { bpf } for pid=5153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.964000 audit[5153]: AVC avc: denied { confidentiality } for pid=5153 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 11 00:22:29.964000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe8969f68 a2=94 
a3=83 items=0 ppid=5107 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.964000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit: BPF prog-id=18 op=LOAD Jul 11 00:22:29.985000 audit[5157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd098d5e8 a2=98 a3=ffffd098d5d8 items=0 ppid=5107 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.985000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 11 00:22:29.985000 audit: BPF prog-id=18 op=UNLOAD Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit: BPF prog-id=19 op=LOAD Jul 11 00:22:29.985000 audit[5157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd098d498 a2=74 a3=95 items=0 ppid=5107 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.985000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 11 00:22:29.985000 audit: BPF prog-id=19 op=UNLOAD Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { perfmon } for pid=5157 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit[5157]: AVC avc: denied { bpf } for pid=5157 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:29.985000 audit: BPF prog-id=20 op=LOAD Jul 11 00:22:29.985000 audit[5157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd098d4c8 a2=40 a3=ffffd098d4f8 items=0 ppid=5107 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:29.985000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 11 00:22:29.985000 audit: BPF prog-id=20 op=UNLOAD Jul 11 00:22:30.043486 kubelet[2117]: E0711 00:22:30.042277 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:30.085139 systemd[1]: run-containerd-runc-k8s.io-9e65a0ad2cbe2d50f12728e098e802362b0e92ff1e19852bd8c09a93be64b3c2-runc.EnxJ9N.mount: Deactivated successfully. Jul 11 00:22:30.095490 systemd-networkd[1099]: vxlan.calico: Link UP Jul 11 00:22:30.095503 systemd-networkd[1099]: vxlan.calico: Gained carrier Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.176000 audit: BPF prog-id=21 op=LOAD Jul 11 00:22:30.176000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdd32e1d8 a2=98 a3=ffffdd32e1c8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.176000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.176000 audit: BPF prog-id=21 op=UNLOAD Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit: BPF prog-id=22 op=LOAD Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdd32deb8 a2=74 a3=95 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit: BPF prog-id=22 op=UNLOAD Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 
00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit: BPF prog-id=23 op=LOAD Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdd32df18 a2=94 a3=2 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit: BPF prog-id=23 op=UNLOAD Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffdd32df48 a2=28 a3=ffffdd32e078 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 
success=no exit=-22 a0=12 a1=ffffdd32df78 a2=28 a3=ffffdd32e0a8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffdd32de28 a2=28 a3=ffffdd32df58 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffdd32df98 a2=28 a3=ffffdd32e0c8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffdd32df78 a2=28 a3=ffffdd32e0a8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffdd32df68 a2=28 a3=ffffdd32e098 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffdd32df98 a2=28 a3=ffffdd32e0c8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffdd32df78 a2=28 a3=ffffdd32e0a8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffdd32df98 a2=28 a3=ffffdd32e0c8 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffdd32df68 a2=28 a3=ffffdd32e098 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffdd32dfe8 a2=28 a3=ffffdd32e128 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for 
pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit: BPF prog-id=24 op=LOAD Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=6 a0=5 a1=ffffdd32de08 a2=40 a3=ffffdd32de38 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit: BPF prog-id=24 op=UNLOAD Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffdd32de30 a2=50 a3=0 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffdd32de30 a2=50 a3=0 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for 
pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit: BPF prog-id=25 op=LOAD Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdd32d598 a2=94 a3=2 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.177000 audit: BPF prog-id=25 op=UNLOAD Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { perfmon } for pid=5204 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit[5204]: AVC avc: denied { bpf } for pid=5204 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.177000 audit: BPF prog-id=26 op=LOAD Jul 11 00:22:30.177000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdd32d728 a2=94 a3=30 items=0 ppid=5107 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit: BPF prog-id=27 op=LOAD Jul 11 00:22:30.189000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffbe93d88 a2=98 a3=fffffbe93d78 items=0 ppid=5107 pid=5213 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.189000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.189000 audit: BPF prog-id=27 op=UNLOAD Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 
audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit: BPF prog-id=28 op=LOAD Jul 11 00:22:30.189000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffbe93a18 a2=74 a3=95 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.189000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.189000 audit: BPF prog-id=28 op=UNLOAD Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for 
pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.189000 audit: BPF prog-id=29 op=LOAD Jul 11 00:22:30.189000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffbe93a78 a2=94 a3=2 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.189000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.189000 audit: BPF prog-id=29 op=UNLOAD Jul 11 00:22:30.205177 sshd[5098]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:30.205000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.206000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.33:22-10.0.0.1:51542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:30.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.33:22-10.0.0.1:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:30.207572 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:51542.service. Jul 11 00:22:30.208363 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:51530.service: Deactivated successfully. Jul 11 00:22:30.209524 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:22:30.212870 systemd-logind[1302]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:22:30.224931 systemd-logind[1302]: Removed session 10. 
Jul 11 00:22:30.262000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.263282 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 51542 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:30.263000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.263000 audit[5214]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc73f8c80 a2=3 a3=1 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.263000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:30.264798 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:30.268674 systemd-logind[1302]: New session 11 of user core. Jul 11 00:22:30.269475 systemd[1]: Started session-11.scope. 
Jul 11 00:22:30.272000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.274000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit: BPF prog-id=30 op=LOAD Jul 11 00:22:30.287000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffbe93a38 a2=40 a3=fffffbe93a68 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.287000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.287000 audit: BPF prog-id=30 op=UNLOAD Jul 11 00:22:30.287000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.287000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffffbe93b50 a2=50 a3=0 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.287000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 
00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffbe93aa8 a2=28 a3=fffffbe93bd8 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffbe93ad8 a2=28 a3=fffffbe93c08 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffbe93988 a2=28 a3=fffffbe93ab8 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffbe93af8 a2=28 a3=fffffbe93c28 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffbe93ad8 a2=28 a3=fffffbe93c08 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffbe93ac8 a2=28 a3=fffffbe93bf8 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffbe93af8 a2=28 a3=fffffbe93c28 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffbe93ad8 a2=28 a3=fffffbe93c08 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffbe93af8 a2=28 a3=fffffbe93c28 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffbe93ac8 a2=28 a3=fffffbe93bf8 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffbe93b48 a2=28 a3=fffffbe93c88 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffbe93880 a2=50 a3=0 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit: BPF prog-id=31 op=LOAD Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffbe93888 a2=94 a3=5 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit: BPF prog-id=31 op=UNLOAD Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffbe93990 a2=50 a3=0 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffffbe93ad8 a2=4 a3=3 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { 
confidentiality } for pid=5213 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffbe93ab8 a2=94 a3=6 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.296000 audit[5213]: AVC avc: denied { confidentiality } for pid=5213 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 11 00:22:30.296000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffbe93288 a2=94 a3=83 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC 
avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { perfmon } for pid=5213 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { confidentiality } for pid=5213 comm="bpftool" 
lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 11 00:22:30.297000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffbe93288 a2=94 a3=83 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffbe94cc8 a2=10 a3=fffffbe94db8 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffbe94b88 a2=10 a3=fffffbe94c78 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffbe94af8 a2=10 a3=fffffbe94c78 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.297000 audit[5213]: AVC avc: denied { bpf } for pid=5213 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 11 00:22:30.297000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffbe94af8 a2=10 a3=fffffbe94c78 items=0 ppid=5107 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 11 00:22:30.303832 env[1316]: time="2025-07-11T00:22:30.303785931Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:30.304000 audit: BPF prog-id=26 op=UNLOAD Jul 11 00:22:30.305671 env[1316]: time="2025-07-11T00:22:30.305627245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:30.306965 env[1316]: time="2025-07-11T00:22:30.306935401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:30.308322 env[1316]: time="2025-07-11T00:22:30.308292556Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:22:30.308796 env[1316]: time="2025-07-11T00:22:30.308753035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 11 00:22:30.311533 env[1316]: time="2025-07-11T00:22:30.311490945Z" level=info msg="CreateContainer within sandbox \"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:22:30.330950 env[1316]: time="2025-07-11T00:22:30.330874039Z" level=info msg="CreateContainer within sandbox \"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"29b9e52d8845fbeef140ac5e56f2863addda55a41cef485b04555b07e1c688b8\"" Jul 11 00:22:30.335165 env[1316]: 
time="2025-07-11T00:22:30.334719066Z" level=info msg="StartContainer for \"29b9e52d8845fbeef140ac5e56f2863addda55a41cef485b04555b07e1c688b8\"" Jul 11 00:22:30.456682 env[1316]: time="2025-07-11T00:22:30.456271094Z" level=info msg="StartContainer for \"29b9e52d8845fbeef140ac5e56f2863addda55a41cef485b04555b07e1c688b8\" returns successfully" Jul 11 00:22:30.490000 audit[5301]: NETFILTER_CFG table=mangle:119 family=2 entries=16 op=nft_register_chain pid=5301 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 11 00:22:30.490000 audit[5301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd04e2770 a2=0 a3=ffffa48e9fa8 items=0 ppid=5107 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.490000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 11 00:22:30.496000 audit[5300]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=5300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 11 00:22:30.496000 audit[5300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffce967d50 a2=0 a3=ffff9cc78fa8 items=0 ppid=5107 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.496000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 11 00:22:30.514000 audit[5299]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=5299 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 11 00:22:30.514000 
audit[5299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc3117890 a2=0 a3=ffffb2ddbfa8 items=0 ppid=5107 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.514000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 11 00:22:30.519000 audit[5302]: NETFILTER_CFG table=filter:122 family=2 entries=315 op=nft_register_chain pid=5302 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 11 00:22:30.519000 audit[5302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=187764 a0=3 a1=fffff4b7e440 a2=0 a3=ffff80c6efa8 items=0 ppid=5107 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 11 00:22:30.614388 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:51548.service. Jul 11 00:22:30.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.33:22-10.0.0.1:51548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:22:30.621855 sshd[5214]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:30.622000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.622000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.625850 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:51542.service: Deactivated successfully. Jul 11 00:22:30.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.33:22-10.0.0.1:51542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:30.628087 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:22:30.628114 systemd-logind[1302]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:22:30.629388 systemd-logind[1302]: Removed session 11. 
Jul 11 00:22:30.665000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.665000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.665000 audit[5315]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4987350 a2=3 a3=1 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:30.665000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:30.667938 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 51548 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:30.667647 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:30.671802 systemd-logind[1302]: New session 12 of user core. Jul 11 00:22:30.672252 systemd[1]: Started session-12.scope. 
Jul 11 00:22:30.675000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.677000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.790908 sshd[5315]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:30.790000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.791000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:30.793697 systemd-logind[1302]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:22:30.794113 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:51548.service: Deactivated successfully. Jul 11 00:22:30.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.33:22-10.0.0.1:51548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:30.794951 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:22:30.796108 systemd-logind[1302]: Removed session 12. 
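Editor's note: the `PROCTITLE proctitle=...` payloads in the audit records above are the invoked command line, hex-encoded with NUL bytes separating the argv elements. A minimal Python sketch (the helper name `decode_proctitle` is illustrative, not part of any tool in this log) that decodes the bpftool payload repeated throughout this section:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE payload: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_str)
    # argv elements are separated by NUL bytes; join with spaces for display
    return " ".join(arg.decode() for arg in raw.split(b"\x00"))

# Payload copied verbatim from the audit records in this log
bpftool_proctitle = (
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
    "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
    "6566696C7465725F76315F63616C69636F5F746D705F41"
)
print(decode_proctitle(bpftool_proctitle))
```

Decoding shows the denied process was `bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A`, i.e. Calico inspecting its pinned XDP prefilter program; the same approach applies to the `iptables-nft-restore` PROCTITLE payloads later in the section.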
Jul 11 00:22:30.949224 kubelet[2117]: I0711 00:22:30.949177 2117 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 11 00:22:30.949620 kubelet[2117]: I0711 00:22:30.949297 2117 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 11 00:22:31.058937 kubelet[2117]: I0711 00:22:31.058791 2117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q27mn" podStartSLOduration=24.113012514 podStartE2EDuration="31.058774092s" podCreationTimestamp="2025-07-11 00:22:00 +0000 UTC" firstStartedPulling="2025-07-11 00:22:23.364207812 +0000 UTC m=+43.594664937" lastFinishedPulling="2025-07-11 00:22:30.30996939 +0000 UTC m=+50.540426515" observedRunningTime="2025-07-11 00:22:31.057817935 +0000 UTC m=+51.288275100" watchObservedRunningTime="2025-07-11 00:22:31.058774092 +0000 UTC m=+51.289231217"
Jul 11 00:22:32.122040 systemd-networkd[1099]: vxlan.calico: Gained IPv6LL
Jul 11 00:22:35.793935 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:36080.service.
Jul 11 00:22:35.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.33:22-10.0.0.1:36080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:35.795128 kernel: kauditd_printk_skb: 569 callbacks suppressed
Jul 11 00:22:35.795313 kernel: audit: type=1130 audit(1752193355.793:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.33:22-10.0.0.1:36080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:35.839000 audit[5347]: USER_ACCT pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.840148 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:35.841433 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:35.840000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.846310 kernel: audit: type=1101 audit(1752193355.839:468): pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.846379 kernel: audit: type=1103 audit(1752193355.840:469): pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.846410 kernel: audit: type=1006 audit(1752193355.840:470): pid=5347 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Jul 11 00:22:35.845958 systemd-logind[1302]: New session 13 of user core.
Jul 11 00:22:35.846086 systemd[1]: Started session-13.scope.
Jul 11 00:22:35.840000 audit[5347]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1e86e30 a2=3 a3=1 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:35.851108 kernel: audit: type=1300 audit(1752193355.840:470): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1e86e30 a2=3 a3=1 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:35.851170 kernel: audit: type=1327 audit(1752193355.840:470): proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:35.840000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:35.849000 audit[5347]: USER_START pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.855484 kernel: audit: type=1105 audit(1752193355.849:471): pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.855551 kernel: audit: type=1103 audit(1752193355.850:472): pid=5350 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.850000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.991101 sshd[5347]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:35.991000 audit[5347]: USER_END pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.993791 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:36080.service: Deactivated successfully.
Jul 11 00:22:35.994766 systemd-logind[1302]: Session 13 logged out. Waiting for processes to exit.
Jul 11 00:22:35.994807 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 00:22:35.995744 systemd-logind[1302]: Removed session 13.
Jul 11 00:22:35.991000 audit[5347]: CRED_DISP pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.999028 kernel: audit: type=1106 audit(1752193355.991:473): pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.999085 kernel: audit: type=1104 audit(1752193355.991:474): pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:35.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.33:22-10.0.0.1:36080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=?
addr=? terminal=? res=success'
Jul 11 00:22:39.872978 env[1316]: time="2025-07-11T00:22:39.872891161Z" level=info msg="StopPodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\""
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.931 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26d03df9-579e-4ee3-a314-28ef2eef7859", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7", Pod:"coredns-7c65d6cfc9-xr7vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c5a69422d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.932 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.932 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" iface="eth0" netns=""
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.932 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.932 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.954 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0"
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.955 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.955 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.964 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0"
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.964 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0"
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.966 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:22:39.970922 env[1316]: 2025-07-11 00:22:39.968 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:39.971388 env[1316]: time="2025-07-11T00:22:39.970943297Z" level=info msg="TearDown network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" successfully"
Jul 11 00:22:39.971388 env[1316]: time="2025-07-11T00:22:39.970975937Z" level=info msg="StopPodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" returns successfully"
Jul 11 00:22:39.971605 env[1316]: time="2025-07-11T00:22:39.971566616Z" level=info msg="RemovePodSandbox for \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\""
Jul 11 00:22:39.971656 env[1316]: time="2025-07-11T00:22:39.971613575Z" level=info msg="Forcibly stopping sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\""
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.004 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26d03df9-579e-4ee3-a314-28ef2eef7859", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8954624b3d01e9fdebb29691c0d7ab90e8594f28e2b0df52afdc506a019bcb7", Pod:"coredns-7c65d6cfc9-xr7vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c5a69422d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.005 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.005 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" iface="eth0" netns=""
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.005 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.005 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.022 [INFO][5409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0"
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.022 [INFO][5409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.022 [INFO][5409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.031 [WARNING][5409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0"
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.031 [INFO][5409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" HandleID="k8s-pod-network.a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63" Workload="localhost-k8s-coredns--7c65d6cfc9--xr7vr-eth0"
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.032 [INFO][5409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:22:40.035578 env[1316]: 2025-07-11 00:22:40.033 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63"
Jul 11 00:22:40.036032 env[1316]: time="2025-07-11T00:22:40.035606045Z" level=info msg="TearDown network for sandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" successfully"
Jul 11 00:22:40.060237 env[1316]: time="2025-07-11T00:22:40.060165661Z" level=info msg="RemovePodSandbox \"a6d98c80a2a05758ab03170b7a7d8a91f2f612606532556882884c4dede30d63\" returns successfully"
Jul 11 00:22:40.060911 env[1316]: time="2025-07-11T00:22:40.060867699Z" level=info msg="StopPodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\""
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.106 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0487ab34-6b79-457d-aa32-776a344009da", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151", Pod:"calico-apiserver-5f447458f6-qfmmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fa25579d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.107 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.107 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" iface="eth0" netns=""
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.107 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.107 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.129 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0"
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.129 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.129 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.137 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0"
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.137 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0"
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.139 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:22:40.142646 env[1316]: 2025-07-11 00:22:40.140 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.144339 env[1316]: time="2025-07-11T00:22:40.142609524Z" level=info msg="TearDown network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" successfully"
Jul 11 00:22:40.144339 env[1316]: time="2025-07-11T00:22:40.143375002Z" level=info msg="StopPodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" returns successfully"
Jul 11 00:22:40.144339 env[1316]: time="2025-07-11T00:22:40.144244160Z" level=info msg="RemovePodSandbox for \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\""
Jul 11 00:22:40.144339 env[1316]: time="2025-07-11T00:22:40.144279880Z" level=info msg="Forcibly stopping sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\""
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.189 [WARNING][5453] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0487ab34-6b79-457d-aa32-776a344009da", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"921f847261f4fc764eba3cbfbd3a365e7fe887df0ba69573360405b4e7e4b151", Pod:"calico-apiserver-5f447458f6-qfmmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fa25579d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.189 [INFO][5453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.189 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" iface="eth0" netns=""
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.189 [INFO][5453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.190 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.212 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0"
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.212 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.212 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.220 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0"
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.220 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" HandleID="k8s-pod-network.a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02" Workload="localhost-k8s-calico--apiserver--5f447458f6--qfmmn-eth0"
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.222 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:22:40.225605 env[1316]: 2025-07-11 00:22:40.223 [INFO][5453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02"
Jul 11 00:22:40.226052 env[1316]: time="2025-07-11T00:22:40.225632346Z" level=info msg="TearDown network for sandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" successfully"
Jul 11 00:22:40.228681 env[1316]: time="2025-07-11T00:22:40.228645458Z" level=info msg="RemovePodSandbox \"a24c91fba9accf1c6e205df03f4c59bb05d9e2d89d92cd1107c6c2137e1b3c02\" returns successfully"
Jul 11 00:22:40.229216 env[1316]: time="2025-07-11T00:22:40.229189217Z" level=info msg="StopPodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\""
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.265 [WARNING][5479] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q27mn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568eb55-c992-4e0f-86d7-395721225945", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b", Pod:"csi-node-driver-q27mn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib894ea9de32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.266 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.267 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" iface="eth0" netns=""
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.267 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.267 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.285 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0"
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.285 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.285 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.294 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0"
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.294 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0"
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.296 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:22:40.300012 env[1316]: 2025-07-11 00:22:40.298 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.300470 env[1316]: time="2025-07-11T00:22:40.300045111Z" level=info msg="TearDown network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" successfully"
Jul 11 00:22:40.300470 env[1316]: time="2025-07-11T00:22:40.300080751Z" level=info msg="StopPodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" returns successfully"
Jul 11 00:22:40.301337 env[1316]: time="2025-07-11T00:22:40.300914028Z" level=info msg="RemovePodSandbox for \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\""
Jul 11 00:22:40.301337 env[1316]: time="2025-07-11T00:22:40.300949228Z" level=info msg="Forcibly stopping sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\""
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.336 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q27mn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568eb55-c992-4e0f-86d7-395721225945", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5ed5f472ef2ddc1e318bb77ff883c19c4ae6750c51edb406fb2257e0d57870b", Pod:"csi-node-driver-q27mn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib894ea9de32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.337 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.337 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" iface="eth0" netns=""
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.337 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.337 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af"
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.355 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0"
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.355 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.355 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.364 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0"
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.364 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" HandleID="k8s-pod-network.0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Workload="localhost-k8s-csi--node--driver--q27mn-eth0"
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.365 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:22:40.369852 env[1316]: 2025-07-11 00:22:40.367 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af" Jul 11 00:22:40.370271 env[1316]: time="2025-07-11T00:22:40.369881487Z" level=info msg="TearDown network for sandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" successfully" Jul 11 00:22:40.372705 env[1316]: time="2025-07-11T00:22:40.372667200Z" level=info msg="RemovePodSandbox \"0f1094c34c30deb8c0abfe3320b363a62b13756c3479bf79fddc8b5c5375a0af\" returns successfully" Jul 11 00:22:40.373183 env[1316]: time="2025-07-11T00:22:40.373150359Z" level=info msg="StopPodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\"" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.406 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"391acb18-2a72-4443-9d5f-c7fd9457ee12", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f", Pod:"coredns-7c65d6cfc9-pzkpx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55f974b092d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.406 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.406 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" iface="eth0" netns="" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.406 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.406 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.423 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.423 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.423 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.431 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.431 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.433 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:40.437632 env[1316]: 2025-07-11 00:22:40.435 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.437632 env[1316]: time="2025-07-11T00:22:40.437608109Z" level=info msg="TearDown network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" successfully" Jul 11 00:22:40.439045 env[1316]: time="2025-07-11T00:22:40.437638589Z" level=info msg="StopPodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" returns successfully" Jul 11 00:22:40.439045 env[1316]: time="2025-07-11T00:22:40.438311427Z" level=info msg="RemovePodSandbox for \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\"" Jul 11 00:22:40.439045 env[1316]: time="2025-07-11T00:22:40.438346267Z" level=info msg="Forcibly stopping sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\"" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.470 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"391acb18-2a72-4443-9d5f-c7fd9457ee12", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24e9f5233752ac5425663d2bf891db4b167d665f2673428c071db6e6dfbe9b4f", Pod:"coredns-7c65d6cfc9-pzkpx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55f974b092d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.471 [INFO][5560] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.471 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" iface="eth0" netns="" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.471 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.471 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.488 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.488 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.488 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.496 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.496 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" HandleID="k8s-pod-network.cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Workload="localhost-k8s-coredns--7c65d6cfc9--pzkpx-eth0" Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.498 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:40.502774 env[1316]: 2025-07-11 00:22:40.501 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36" Jul 11 00:22:40.503205 env[1316]: time="2025-07-11T00:22:40.502806178Z" level=info msg="TearDown network for sandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" successfully" Jul 11 00:22:40.506312 env[1316]: time="2025-07-11T00:22:40.506265289Z" level=info msg="RemovePodSandbox \"cb5aba68b8727593c91df3a7779128481ed73f7d1c9b7d1873a0c055ce76cc36\" returns successfully" Jul 11 00:22:40.506815 env[1316]: time="2025-07-11T00:22:40.506769487Z" level=info msg="StopPodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\"" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.550 [WARNING][5592] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200", Pod:"calico-apiserver-5f447458f6-544gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7b7ab47d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.550 [INFO][5592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.550 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" iface="eth0" netns="" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.550 [INFO][5592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.550 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.569 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.569 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.569 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.578 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.578 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.579 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:40.583424 env[1316]: 2025-07-11 00:22:40.581 [INFO][5592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.583922 env[1316]: time="2025-07-11T00:22:40.583451606Z" level=info msg="TearDown network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" successfully" Jul 11 00:22:40.583922 env[1316]: time="2025-07-11T00:22:40.583482326Z" level=info msg="StopPodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" returns successfully" Jul 11 00:22:40.584105 env[1316]: time="2025-07-11T00:22:40.584062164Z" level=info msg="RemovePodSandbox for \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\"" Jul 11 00:22:40.584146 env[1316]: time="2025-07-11T00:22:40.584098884Z" level=info msg="Forcibly stopping sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\"" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.616 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0", GenerateName:"calico-apiserver-5f447458f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"60d3bbef-c429-4b0c-9f0e-ab7b1bc7e60a", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f447458f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"456b265775a11011dcffc0905c8d655b44420c052ac30635ff870eab25219200", Pod:"calico-apiserver-5f447458f6-544gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7b7ab47d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.617 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.617 [INFO][5623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" iface="eth0" netns="" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.617 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.617 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.635 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.635 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.635 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.643 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.643 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" HandleID="k8s-pod-network.2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Workload="localhost-k8s-calico--apiserver--5f447458f6--544gl-eth0" Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.645 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:40.652010 env[1316]: 2025-07-11 00:22:40.647 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503" Jul 11 00:22:40.652422 env[1316]: time="2025-07-11T00:22:40.652040226Z" level=info msg="TearDown network for sandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" successfully" Jul 11 00:22:40.654981 env[1316]: time="2025-07-11T00:22:40.654943658Z" level=info msg="RemovePodSandbox \"2444de6baef213fd00188dac12d37617492473945e46f1d99a94e7ba8390b503\" returns successfully" Jul 11 00:22:40.655507 env[1316]: time="2025-07-11T00:22:40.655470497Z" level=info msg="StopPodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\"" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.686 [WARNING][5651] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" WorkloadEndpoint="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.686 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.686 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" iface="eth0" netns="" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.686 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.686 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.704 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.704 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.704 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.712 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.712 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.714 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:40.717989 env[1316]: 2025-07-11 00:22:40.716 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.719208 env[1316]: time="2025-07-11T00:22:40.717975933Z" level=info msg="TearDown network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" successfully" Jul 11 00:22:40.719208 env[1316]: time="2025-07-11T00:22:40.718007652Z" level=info msg="StopPodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" returns successfully" Jul 11 00:22:40.719208 env[1316]: time="2025-07-11T00:22:40.718452291Z" level=info msg="RemovePodSandbox for \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\"" Jul 11 00:22:40.719208 env[1316]: time="2025-07-11T00:22:40.718493411Z" level=info msg="Forcibly stopping sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\"" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.750 [WARNING][5677] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" 
WorkloadEndpoint="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.750 [INFO][5677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.750 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" iface="eth0" netns="" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.750 [INFO][5677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.750 [INFO][5677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.768 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.768 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.768 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.776 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.776 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" HandleID="k8s-pod-network.0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Workload="localhost-k8s-whisker--7474848f44--d8mxf-eth0" Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.777 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:40.781522 env[1316]: 2025-07-11 00:22:40.779 [INFO][5677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e" Jul 11 00:22:40.781865 env[1316]: time="2025-07-11T00:22:40.781551285Z" level=info msg="TearDown network for sandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" successfully" Jul 11 00:22:40.784489 env[1316]: time="2025-07-11T00:22:40.784439878Z" level=info msg="RemovePodSandbox \"0006a5d39b4557b41aa68a6a0b9862ab24797f3aeb000dfc2e575630d850573e\" returns successfully" Jul 11 00:22:40.784960 env[1316]: time="2025-07-11T00:22:40.784927357Z" level=info msg="StopPodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\"" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.817 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8b96db4e-8484-47ca-a223-07747800a0c8", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb", Pod:"goldmane-58fd7646b9-z7hg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali735d7e9929e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.817 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.818 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" iface="eth0" netns="" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.818 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.818 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.836 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.836 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.836 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.845 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.845 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.846 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:40.850627 env[1316]: 2025-07-11 00:22:40.848 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.851059 env[1316]: time="2025-07-11T00:22:40.850656984Z" level=info msg="TearDown network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" successfully" Jul 11 00:22:40.851059 env[1316]: time="2025-07-11T00:22:40.850687904Z" level=info msg="StopPodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" returns successfully" Jul 11 00:22:40.851179 env[1316]: time="2025-07-11T00:22:40.851134023Z" level=info msg="RemovePodSandbox for \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\"" Jul 11 00:22:40.851210 env[1316]: time="2025-07-11T00:22:40.851177303Z" level=info msg="Forcibly stopping sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\"" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.885 [WARNING][5730] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8b96db4e-8484-47ca-a223-07747800a0c8", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421f7aa1874bc069bc3b81f30ff60b304ffaed45a82227901e79cf5cd7dcdedb", Pod:"goldmane-58fd7646b9-z7hg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali735d7e9929e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.885 [INFO][5730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.886 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" iface="eth0" netns="" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.886 [INFO][5730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.886 [INFO][5730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.903 [INFO][5739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.903 [INFO][5739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.903 [INFO][5739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.911 [WARNING][5739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.911 [INFO][5739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" HandleID="k8s-pod-network.f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Workload="localhost-k8s-goldmane--58fd7646b9--z7hg6-eth0" Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.913 [INFO][5739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:40.917651 env[1316]: 2025-07-11 00:22:40.915 [INFO][5730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768" Jul 11 00:22:40.918304 env[1316]: time="2025-07-11T00:22:40.917692488Z" level=info msg="TearDown network for sandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" successfully" Jul 11 00:22:40.924186 env[1316]: time="2025-07-11T00:22:40.924139751Z" level=info msg="RemovePodSandbox \"f731560a030e644016a6b483cee2ae8ce7e60e066ad13baed7162fc82e8cb768\" returns successfully" Jul 11 00:22:40.924691 env[1316]: time="2025-07-11T00:22:40.924652190Z" level=info msg="StopPodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\"" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.959 [WARNING][5757] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0", GenerateName:"calico-kube-controllers-7f89b684d9-", Namespace:"calico-system", SelfLink:"", UID:"149173fd-5331-45f1-97cd-d2699b6084a9", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f89b684d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6", Pod:"calico-kube-controllers-7f89b684d9-vbwhc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a5bc5932d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.959 [INFO][5757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.959 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" iface="eth0" netns="" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.959 [INFO][5757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.959 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.977 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.977 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.977 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.985 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.985 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.987 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:40.990560 env[1316]: 2025-07-11 00:22:40.988 [INFO][5757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:40.990976 env[1316]: time="2025-07-11T00:22:40.990590456Z" level=info msg="TearDown network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" successfully" Jul 11 00:22:40.990976 env[1316]: time="2025-07-11T00:22:40.990622376Z" level=info msg="StopPodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" returns successfully" Jul 11 00:22:40.991104 env[1316]: time="2025-07-11T00:22:40.991073015Z" level=info msg="RemovePodSandbox for \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\"" Jul 11 00:22:40.991150 env[1316]: time="2025-07-11T00:22:40.991116015Z" level=info msg="Forcibly stopping sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\"" Jul 11 00:22:40.994716 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:36082.service. 
Jul 11 00:22:40.998746 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 11 00:22:40.998798 kernel: audit: type=1130 audit(1752193360.993:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.33:22-10.0.0.1:36082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:40.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.33:22-10.0.0.1:36082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:41.040000 audit[5780]: USER_ACCT pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.043064 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 36082 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:41.045000 audit[5780]: CRED_ACQ pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.047114 sshd[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:41.049565 kernel: audit: type=1101 audit(1752193361.040:477): pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.050687 kernel: audit: type=1103 audit(1752193361.045:478): pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.051936 kernel: audit: type=1006 audit(1752193361.045:479): pid=5780 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 11 00:22:41.051979 kernel: audit: type=1300 audit(1752193361.045:479): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca7aea90 a2=3 a3=1 items=0 ppid=1 pid=5780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:41.045000 audit[5780]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca7aea90 a2=3 a3=1 items=0 ppid=1 pid=5780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:41.050915 systemd-logind[1302]: New session 14 of user core. Jul 11 00:22:41.051721 systemd[1]: Started session-14.scope. 
Jul 11 00:22:41.045000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:41.055512 kernel: audit: type=1327 audit(1752193361.045:479): proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:41.054000 audit[5780]: USER_START pid=5780 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.060297 kernel: audit: type=1105 audit(1752193361.054:480): pid=5780 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.060346 kernel: audit: type=1103 audit(1752193361.056:481): pid=5800 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.056000 audit[5800]: CRED_ACQ pid=5800 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.037 [WARNING][5784] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0", GenerateName:"calico-kube-controllers-7f89b684d9-", Namespace:"calico-system", SelfLink:"", UID:"149173fd-5331-45f1-97cd-d2699b6084a9", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f89b684d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c739ad2c2ce2d8a0cc541d4da3fc0cad84250c0b4cb2736f4abcfb43b2093a6", Pod:"calico-kube-controllers-7f89b684d9-vbwhc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a5bc5932d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.037 [INFO][5784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.037 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" iface="eth0" netns="" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.037 [INFO][5784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.037 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.067 [INFO][5793] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.067 [INFO][5793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.067 [INFO][5793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.078 [WARNING][5793] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.078 [INFO][5793] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" HandleID="k8s-pod-network.731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Workload="localhost-k8s-calico--kube--controllers--7f89b684d9--vbwhc-eth0" Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.080 [INFO][5793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:41.089617 env[1316]: 2025-07-11 00:22:41.087 [INFO][5784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef" Jul 11 00:22:41.090106 env[1316]: time="2025-07-11T00:22:41.089657082Z" level=info msg="TearDown network for sandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" successfully" Jul 11 00:22:41.098288 env[1316]: time="2025-07-11T00:22:41.098227580Z" level=info msg="RemovePodSandbox \"731305515d4f9c11c23aec4030fcb1f4a39fab62657700396b3fadf787ee60ef\" returns successfully" Jul 11 00:22:41.219595 sshd[5780]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:41.220000 audit[5780]: USER_END pid=5780 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.223627 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:36082.service: Deactivated successfully. Jul 11 00:22:41.223673 systemd-logind[1302]: Session 14 logged out. Waiting for processes to exit. 
Jul 11 00:22:41.224506 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:22:41.220000 audit[5780]: CRED_DISP pid=5780 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.226269 systemd-logind[1302]: Removed session 14. Jul 11 00:22:41.228712 kernel: audit: type=1106 audit(1752193361.220:482): pid=5780 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.228822 kernel: audit: type=1104 audit(1752193361.220:483): pid=5780 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:41.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.33:22-10.0.0.1:36082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:41.749872 systemd[1]: run-containerd-runc-k8s.io-e694c9ae94ffdbf8bb3761012ed7cc264ab889111b5e34061b025b1740665595-runc.RHyH9H.mount: Deactivated successfully. Jul 11 00:22:46.222818 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:52910.service. Jul 11 00:22:46.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.33:22-10.0.0.1:52910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:22:46.223947 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 11 00:22:46.224022 kernel: audit: type=1130 audit(1752193366.222:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.33:22-10.0.0.1:52910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:22:46.266000 audit[5855]: USER_ACCT pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:46.268432 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 52910 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:22:46.268862 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:46.267000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:46.273310 kernel: audit: type=1101 audit(1752193366.266:486): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:46.273392 kernel: audit: type=1103 audit(1752193366.267:487): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:22:46.275292 kernel: audit: type=1006 audit(1752193366.267:488): pid=5855 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=15 res=1 Jul 11 00:22:46.267000 audit[5855]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff4e94e0 a2=3 a3=1 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:46.278640 kernel: audit: type=1300 audit(1752193366.267:488): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff4e94e0 a2=3 a3=1 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:22:46.267000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:46.280681 kernel: audit: type=1327 audit(1752193366.267:488): proctitle=737368643A20636F7265205B707269765D Jul 11 00:22:46.280322 systemd-logind[1302]: New session 15 of user core. Jul 11 00:22:46.281052 systemd[1]: Started session-15.scope. 
Jul 11 00:22:46.283000 audit[5855]: USER_START pid=5855 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.285000 audit[5858]: CRED_ACQ pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.292908 kernel: audit: type=1105 audit(1752193366.283:489): pid=5855 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.292975 kernel: audit: type=1103 audit(1752193366.285:490): pid=5858 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.426189 sshd[5855]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:46.426000 audit[5855]: USER_END pid=5855 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.428609 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:52910.service: Deactivated successfully.
Jul 11 00:22:46.429567 systemd-logind[1302]: Session 15 logged out. Waiting for processes to exit.
Jul 11 00:22:46.429618 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 00:22:46.430349 systemd-logind[1302]: Removed session 15.
Jul 11 00:22:46.426000 audit[5855]: CRED_DISP pid=5855 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.433738 kernel: audit: type=1106 audit(1752193366.426:491): pid=5855 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.433825 kernel: audit: type=1104 audit(1752193366.426:492): pid=5855 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:46.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.33:22-10.0.0.1:52910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:50.637597 systemd[1]: run-containerd-runc-k8s.io-9e65a0ad2cbe2d50f12728e098e802362b0e92ff1e19852bd8c09a93be64b3c2-runc.nMSPTJ.mount: Deactivated successfully.
Jul 11 00:22:51.429252 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:52914.service.
Jul 11 00:22:51.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.33:22-10.0.0.1:52914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:51.430722 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 11 00:22:51.430796 kernel: audit: type=1130 audit(1752193371.428:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.33:22-10.0.0.1:52914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:51.484000 audit[5899]: USER_ACCT pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.486227 sshd[5899]: Accepted publickey for core from 10.0.0.1 port 52914 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:51.488946 kernel: audit: type=1101 audit(1752193371.484:495): pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.488000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.489949 sshd[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:51.495070 kernel: audit: type=1103 audit(1752193371.488:496): pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.495281 kernel: audit: type=1006 audit(1752193371.488:497): pid=5899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Jul 11 00:22:51.495312 kernel: audit: type=1300 audit(1752193371.488:497): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3e14900 a2=3 a3=1 items=0 ppid=1 pid=5899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:51.488000 audit[5899]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3e14900 a2=3 a3=1 items=0 ppid=1 pid=5899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:51.497359 systemd[1]: Started session-16.scope.
Jul 11 00:22:51.497633 systemd-logind[1302]: New session 16 of user core.
Jul 11 00:22:51.498239 kernel: audit: type=1327 audit(1752193371.488:497): proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:51.488000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:51.501000 audit[5899]: USER_START pid=5899 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.501000 audit[5902]: CRED_ACQ pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.508775 kernel: audit: type=1105 audit(1752193371.501:498): pid=5899 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.508863 kernel: audit: type=1103 audit(1752193371.501:499): pid=5902 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.754317 sshd[5899]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:51.754000 audit[5899]: USER_END pid=5899 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.757496 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:52914.service: Deactivated successfully.
Jul 11 00:22:51.758599 systemd-logind[1302]: Session 16 logged out. Waiting for processes to exit.
Jul 11 00:22:51.758643 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 00:22:51.759462 systemd-logind[1302]: Removed session 16.
Jul 11 00:22:51.755000 audit[5899]: CRED_DISP pid=5899 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.762701 kernel: audit: type=1106 audit(1752193371.754:500): pid=5899 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.762789 kernel: audit: type=1104 audit(1752193371.755:501): pid=5899 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:51.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.33:22-10.0.0.1:52914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:55.127616 systemd[1]: run-containerd-runc-k8s.io-288e96fc2f9eff07e79c299a656604981da3eebefb2d38ef761a8d7b5d220438-runc.a1XEn6.mount: Deactivated successfully.
Jul 11 00:22:55.228000 audit[5935]: NETFILTER_CFG table=filter:123 family=2 entries=9 op=nft_register_rule pid=5935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:22:55.228000 audit[5935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff97eac10 a2=0 a3=1 items=0 ppid=2270 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:55.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:22:55.236000 audit[5935]: NETFILTER_CFG table=nat:124 family=2 entries=31 op=nft_register_chain pid=5935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:22:55.236000 audit[5935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=fffff97eac10 a2=0 a3=1 items=0 ppid=2270 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:55.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:22:56.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.33:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:56.756832 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:48274.service.
Jul 11 00:22:56.760607 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jul 11 00:22:56.760700 kernel: audit: type=1130 audit(1752193376.756:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.33:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:56.806000 audit[5956]: USER_ACCT pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.807858 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 48274 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:56.809169 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:56.807000 audit[5956]: CRED_ACQ pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.813684 systemd[1]: Started session-17.scope.
Jul 11 00:22:56.813954 kernel: audit: type=1101 audit(1752193376.806:506): pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.813997 kernel: audit: type=1103 audit(1752193376.807:507): pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.814021 kernel: audit: type=1006 audit(1752193376.807:508): pid=5956 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Jul 11 00:22:56.814222 systemd-logind[1302]: New session 17 of user core.
Jul 11 00:22:56.815785 kernel: audit: type=1300 audit(1752193376.807:508): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc628f750 a2=3 a3=1 items=0 ppid=1 pid=5956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:56.807000 audit[5956]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc628f750 a2=3 a3=1 items=0 ppid=1 pid=5956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:56.807000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:56.821810 kernel: audit: type=1327 audit(1752193376.807:508): proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:56.821860 kernel: audit: type=1105 audit(1752193376.819:509): pid=5956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.819000 audit[5956]: USER_START pid=5956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.820000 audit[5959]: CRED_ACQ pid=5959 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.827388 kernel: audit: type=1103 audit(1752193376.820:510): pid=5959 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.881260 kubelet[2117]: E0711 00:22:56.881211 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:56.973114 sshd[5956]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:56.974000 audit[5956]: USER_END pid=5956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.976258 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:48276.service.
Jul 11 00:22:56.979958 kernel: audit: type=1106 audit(1752193376.974:511): pid=5956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.979000 audit[5956]: CRED_DISP pid=5956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.983410 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:48274.service: Deactivated successfully.
Jul 11 00:22:56.984398 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 00:22:56.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.33:22-10.0.0.1:48276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:56.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.33:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:56.984927 kernel: audit: type=1104 audit(1752193376.979:512): pid=5956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:56.989534 systemd-logind[1302]: Session 17 logged out. Waiting for processes to exit.
Jul 11 00:22:56.990383 systemd-logind[1302]: Removed session 17.
Jul 11 00:22:57.024000 audit[5968]: USER_ACCT pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.025353 sshd[5968]: Accepted publickey for core from 10.0.0.1 port 48276 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:57.026000 audit[5968]: CRED_ACQ pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.026000 audit[5968]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2a2c070 a2=3 a3=1 items=0 ppid=1 pid=5968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:57.026000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:57.027413 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:57.031336 systemd-logind[1302]: New session 18 of user core.
Jul 11 00:22:57.031767 systemd[1]: Started session-18.scope.
Jul 11 00:22:57.036000 audit[5968]: USER_START pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.037000 audit[5973]: CRED_ACQ pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.304192 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:48292.service.
Jul 11 00:22:57.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.33:22-10.0.0.1:48292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:57.305289 sshd[5968]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:57.305000 audit[5968]: USER_END pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.306000 audit[5968]: CRED_DISP pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.308852 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:48276.service: Deactivated successfully.
Jul 11 00:22:57.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.33:22-10.0.0.1:48276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:57.310007 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:22:57.310071 systemd-logind[1302]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:22:57.311453 systemd-logind[1302]: Removed session 18.
Jul 11 00:22:57.353000 audit[5980]: USER_ACCT pid=5980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.354441 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 48292 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:57.354000 audit[5980]: CRED_ACQ pid=5980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.354000 audit[5980]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0ec7430 a2=3 a3=1 items=0 ppid=1 pid=5980 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:57.354000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:57.356153 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:57.360113 systemd-logind[1302]: New session 19 of user core.
Jul 11 00:22:57.360502 systemd[1]: Started session-19.scope.
Jul 11 00:22:57.363000 audit[5980]: USER_START pid=5980 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:57.364000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.202000 audit[5997]: NETFILTER_CFG table=filter:125 family=2 entries=8 op=nft_register_rule pid=5997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:22:59.202000 audit[5997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffdd746860 a2=0 a3=1 items=0 ppid=2270 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:59.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:22:59.206662 sshd[5980]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:59.206000 audit[5980]: USER_END pid=5980 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.207773 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:48298.service.
Jul 11 00:22:59.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.33:22-10.0.0.1:48298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:59.207000 audit[5980]: CRED_DISP pid=5980 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.211872 systemd-logind[1302]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:22:59.212639 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:48292.service: Deactivated successfully.
Jul 11 00:22:59.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.33:22-10.0.0.1:48292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:59.213542 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:22:59.214122 systemd-logind[1302]: Removed session 19.
Jul 11 00:22:59.210000 audit[5997]: NETFILTER_CFG table=nat:126 family=2 entries=26 op=nft_register_rule pid=5997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:22:59.210000 audit[5997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffdd746860 a2=0 a3=1 items=0 ppid=2270 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:59.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:22:59.235000 audit[6003]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=6003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:22:59.235000 audit[6003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffd6d89b70 a2=0 a3=1 items=0 ppid=2270 pid=6003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:59.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:22:59.242000 audit[6003]: NETFILTER_CFG table=nat:128 family=2 entries=26 op=nft_register_rule pid=6003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:22:59.242000 audit[6003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffd6d89b70 a2=0 a3=1 items=0 ppid=2270 pid=6003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:59.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:22:59.255000 audit[5998]: USER_ACCT pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.256622 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:59.256000 audit[5998]: CRED_ACQ pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.256000 audit[5998]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6a4b5a0 a2=3 a3=1 items=0 ppid=1 pid=5998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:59.256000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:59.257920 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:59.261960 systemd-logind[1302]: New session 20 of user core.
Jul 11 00:22:59.262418 systemd[1]: Started session-20.scope.
Jul 11 00:22:59.266000 audit[5998]: USER_START pid=5998 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.267000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.836617 sshd[5998]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:59.836000 audit[5998]: USER_END pid=5998 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.836000 audit[5998]: CRED_DISP pid=5998 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.839408 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:48300.service.
Jul 11 00:22:59.840051 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:48298.service: Deactivated successfully.
Jul 11 00:22:59.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.33:22-10.0.0.1:48300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:59.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.33:22-10.0.0.1:48298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:22:59.842478 systemd-logind[1302]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:22:59.842547 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:22:59.843615 systemd-logind[1302]: Removed session 20.
Jul 11 00:22:59.890000 audit[6012]: USER_ACCT pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.891435 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 48300 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE
Jul 11 00:22:59.891000 audit[6012]: CRED_ACQ pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.891000 audit[6012]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5d24db0 a2=3 a3=1 items=0 ppid=1 pid=6012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:22:59.891000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 11 00:22:59.892813 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:59.897834 systemd-logind[1302]: New session 21 of user core.
Jul 11 00:22:59.898261 systemd[1]: Started session-21.scope.
Jul 11 00:22:59.903000 audit[6012]: USER_START pid=6012 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:22:59.906000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:23:00.041842 sshd[6012]: pam_unix(sshd:session): session closed for user core
Jul 11 00:23:00.041000 audit[6012]: USER_END pid=6012 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:23:00.042000 audit[6012]: CRED_DISP pid=6012 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 11 00:23:00.044381 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:48300.service: Deactivated successfully.
Jul 11 00:23:00.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.33:22-10.0.0.1:48300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:23:00.045441 systemd-logind[1302]: Session 21 logged out. Waiting for processes to exit.
Jul 11 00:23:00.045505 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 00:23:00.046590 systemd-logind[1302]: Removed session 21.
Jul 11 00:23:00.101707 kubelet[2117]: I0711 00:23:00.101593 2117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:23:00.256000 audit[6030]: NETFILTER_CFG table=filter:129 family=2 entries=32 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:23:00.256000 audit[6030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc69db7b0 a2=0 a3=1 items=0 ppid=2270 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:23:00.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:23:00.262000 audit[6030]: NETFILTER_CFG table=nat:130 family=2 entries=38 op=nft_register_chain pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:23:00.262000 audit[6030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=ffffc69db7b0 a2=0 a3=1 items=0 ppid=2270 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:23:00.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:23:04.022000 audit[6032]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=6032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:23:04.024153 kernel: kauditd_printk_skb: 63 callbacks suppressed
Jul 11 00:23:04.024226 kernel: audit: type=1325 audit(1752193384.022:556): table=filter:131 family=2 entries=20 op=nft_register_rule pid=6032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:23:04.022000 audit[6032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe0aad100 a2=0 a3=1 items=0 ppid=2270 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:23:04.030082 kernel: audit: type=1300 audit(1752193384.022:556): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe0aad100 a2=0 a3=1 items=0 ppid=2270 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:23:04.030128 kernel: audit: type=1327 audit(1752193384.022:556): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:23:04.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:23:04.034000 audit[6032]: NETFILTER_CFG table=nat:132 family=2 entries=110 op=nft_register_chain pid=6032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:23:04.034000 audit[6032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffe0aad100 a2=0 a3=1 items=0 ppid=2270 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:23:04.041900 kernel: audit: type=1325 audit(1752193384.034:557): table=nat:132 family=2 entries=110 op=nft_register_chain pid=6032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 11 00:23:04.041963 kernel: audit: type=1300 audit(1752193384.034:557): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffe0aad100 a2=0 a3=1 items=0 ppid=2270 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 11 00:23:04.041982 kernel: audit: type=1327 audit(1752193384.034:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:23:04.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 11 00:23:04.882691 kubelet[2117]: E0711 00:23:04.882653 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:05.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.33:22-10.0.0.1:36482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:23:05.045564 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:36482.service.
Jul 11 00:23:05.048912 kernel: audit: type=1130 audit(1752193385.044:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.33:22-10.0.0.1:36482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 11 00:23:05.086000 audit[6034]: USER_ACCT pid=6034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.087721 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 36482 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:23:05.090904 kernel: audit: type=1101 audit(1752193385.086:559): pid=6034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.091000 audit[6034]: CRED_ACQ pid=6034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.092618 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:23:05.097296 kernel: audit: type=1103 audit(1752193385.091:560): pid=6034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.097375 kernel: audit: type=1006 audit(1752193385.091:561): pid=6034 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 11 00:23:05.091000 audit[6034]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff727d3c0 a2=3 a3=1 items=0 ppid=1 pid=6034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:23:05.091000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:23:05.101381 systemd[1]: Started session-22.scope. Jul 11 00:23:05.101678 systemd-logind[1302]: New session 22 of user core. Jul 11 00:23:05.106000 audit[6034]: USER_START pid=6034 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.107000 audit[6037]: CRED_ACQ pid=6037 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.237290 sshd[6034]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:05.237000 audit[6034]: USER_END pid=6034 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.237000 audit[6034]: CRED_DISP pid=6034 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:05.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.33:22-10.0.0.1:36482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:23:05.240716 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:36482.service: Deactivated successfully. Jul 11 00:23:05.242263 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:23:05.242703 systemd-logind[1302]: Session 22 logged out. 
Waiting for processes to exit. Jul 11 00:23:05.243522 systemd-logind[1302]: Removed session 22. Jul 11 00:23:05.881375 kubelet[2117]: E0711 00:23:05.881334 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:09.882826 kubelet[2117]: E0711 00:23:09.882786 2117 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:10.240829 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:36490.service. Jul 11 00:23:10.244868 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 11 00:23:10.244956 kernel: audit: type=1130 audit(1752193390.240:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.33:22-10.0.0.1:36490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:23:10.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.33:22-10.0.0.1:36490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:23:10.281000 audit[6048]: USER_ACCT pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.283047 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 36490 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:23:10.284267 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:23:10.283000 audit[6048]: CRED_ACQ pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.289416 kernel: audit: type=1101 audit(1752193390.281:568): pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.289471 kernel: audit: type=1103 audit(1752193390.283:569): pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.294411 systemd-logind[1302]: New session 23 of user core. Jul 11 00:23:10.295520 systemd[1]: Started session-23.scope. 
Jul 11 00:23:10.297588 kernel: audit: type=1006 audit(1752193390.283:570): pid=6048 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 11 00:23:10.297652 kernel: audit: type=1300 audit(1752193390.283:570): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3de27c0 a2=3 a3=1 items=0 ppid=1 pid=6048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:23:10.283000 audit[6048]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3de27c0 a2=3 a3=1 items=0 ppid=1 pid=6048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:23:10.283000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:23:10.301800 kernel: audit: type=1327 audit(1752193390.283:570): proctitle=737368643A20636F7265205B707269765D Jul 11 00:23:10.301860 kernel: audit: type=1105 audit(1752193390.300:571): pid=6048 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.300000 audit[6048]: USER_START pid=6048 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.304000 audit[6051]: CRED_ACQ pid=6051 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 
11 00:23:10.308570 kernel: audit: type=1103 audit(1752193390.304:572): pid=6051 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.426479 sshd[6048]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:10.426000 audit[6048]: USER_END pid=6048 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.429203 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:36490.service: Deactivated successfully. Jul 11 00:23:10.430174 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:23:10.427000 audit[6048]: CRED_DISP pid=6048 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.432159 systemd-logind[1302]: Session 23 logged out. Waiting for processes to exit. 
Jul 11 00:23:10.435031 kernel: audit: type=1106 audit(1752193390.426:573): pid=6048 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.435096 kernel: audit: type=1104 audit(1752193390.427:574): pid=6048 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:10.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.33:22-10.0.0.1:36490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:23:10.435523 systemd-logind[1302]: Removed session 23. Jul 11 00:23:15.430368 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:42902.service. Jul 11 00:23:15.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.33:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:23:15.431432 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 11 00:23:15.431497 kernel: audit: type=1130 audit(1752193395.429:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.33:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:23:15.470000 audit[6091]: USER_ACCT pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.471909 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:23:15.473330 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:23:15.472000 audit[6091]: CRED_ACQ pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.477820 kernel: audit: type=1101 audit(1752193395.470:577): pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.477911 kernel: audit: type=1103 audit(1752193395.472:578): pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.479823 kernel: audit: type=1006 audit(1752193395.472:579): pid=6091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 11 00:23:15.472000 audit[6091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6959b60 a2=3 a3=1 items=0 ppid=1 pid=6091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:23:15.482143 
systemd-logind[1302]: New session 24 of user core. Jul 11 00:23:15.482575 systemd[1]: Started session-24.scope. Jul 11 00:23:15.483336 kernel: audit: type=1300 audit(1752193395.472:579): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6959b60 a2=3 a3=1 items=0 ppid=1 pid=6091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:23:15.472000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 11 00:23:15.484608 kernel: audit: type=1327 audit(1752193395.472:579): proctitle=737368643A20636F7265205B707269765D Jul 11 00:23:15.486000 audit[6091]: USER_START pid=6091 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.490000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.494019 kernel: audit: type=1105 audit(1752193395.486:580): pid=6091 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.494083 kernel: audit: type=1103 audit(1752193395.490:581): pid=6094 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.629964 sshd[6091]: pam_unix(sshd:session): session closed for user core Jul 11 
00:23:15.629000 audit[6091]: USER_END pid=6091 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.632915 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:42902.service: Deactivated successfully. Jul 11 00:23:15.634087 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:23:15.634215 systemd-logind[1302]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:23:15.630000 audit[6091]: CRED_DISP pid=6091 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.635324 systemd-logind[1302]: Removed session 24. Jul 11 00:23:15.637527 kernel: audit: type=1106 audit(1752193395.629:582): pid=6091 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.637596 kernel: audit: type=1104 audit(1752193395.630:583): pid=6091 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 11 00:23:15.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.33:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'