Jul 15 11:20:55.722728 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 11:20:55.722747 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Jul 15 10:06:30 -00 2025
Jul 15 11:20:55.722754 kernel: efi: EFI v2.70 by EDK II
Jul 15 11:20:55.722760 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 15 11:20:55.722765 kernel: random: crng init done
Jul 15 11:20:55.722770 kernel: ACPI: Early table checksum verification disabled
Jul 15 11:20:55.722777 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 15 11:20:55.722784 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 11:20:55.722789 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722795 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722800 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722806 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722811 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722816 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722824 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722830 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722836 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:20:55.722858 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 11:20:55.722864 kernel: NUMA: Failed to initialise from firmware
Jul 15 11:20:55.722870 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 11:20:55.722875 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 15 11:20:55.722881 kernel: Zone ranges:
Jul 15 11:20:55.722887 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 11:20:55.722893 kernel: DMA32 empty
Jul 15 11:20:55.722899 kernel: Normal empty
Jul 15 11:20:55.722905 kernel: Movable zone start for each node
Jul 15 11:20:55.722910 kernel: Early memory node ranges
Jul 15 11:20:55.722916 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 15 11:20:55.722922 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 15 11:20:55.722928 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 15 11:20:55.722933 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 15 11:20:55.722939 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 15 11:20:55.722945 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 15 11:20:55.722950 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 15 11:20:55.722956 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 11:20:55.722963 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 11:20:55.722969 kernel: psci: probing for conduit method from ACPI.
Jul 15 11:20:55.722974 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 11:20:55.722980 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 11:20:55.722986 kernel: psci: Trusted OS migration not required
Jul 15 11:20:55.722994 kernel: psci: SMC Calling Convention v1.1
Jul 15 11:20:55.723001 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 11:20:55.723008 kernel: ACPI: SRAT not present
Jul 15 11:20:55.723014 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 15 11:20:55.723021 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 15 11:20:55.723027 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 11:20:55.723033 kernel: Detected PIPT I-cache on CPU0
Jul 15 11:20:55.723039 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 11:20:55.723045 kernel: CPU features: detected: Hardware dirty bit management
Jul 15 11:20:55.723051 kernel: CPU features: detected: Spectre-v4
Jul 15 11:20:55.723057 kernel: CPU features: detected: Spectre-BHB
Jul 15 11:20:55.723064 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 11:20:55.723070 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 11:20:55.723076 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 11:20:55.723082 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 11:20:55.723088 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 15 11:20:55.723094 kernel: Policy zone: DMA
Jul 15 11:20:55.723101 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=66cb9a8d6ebbbd62ba3e197b019773f14f902d0ee05716ff2fc41a726e431e67
Jul 15 11:20:55.723108 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 11:20:55.723114 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 11:20:55.723120 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 11:20:55.723126 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 11:20:55.723134 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 15 11:20:55.723140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 11:20:55.723146 kernel: trace event string verifier disabled
Jul 15 11:20:55.723152 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 11:20:55.723159 kernel: rcu: RCU event tracing is enabled.
Jul 15 11:20:55.723165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 11:20:55.723171 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 11:20:55.723178 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 11:20:55.723184 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 11:20:55.723190 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 11:20:55.723196 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 11:20:55.723203 kernel: GICv3: 256 SPIs implemented
Jul 15 11:20:55.723210 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 11:20:55.723216 kernel: GICv3: Distributor has no Range Selector support
Jul 15 11:20:55.723222 kernel: Root IRQ handler: gic_handle_irq
Jul 15 11:20:55.723228 kernel: GICv3: 16 PPIs implemented
Jul 15 11:20:55.723234 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 11:20:55.723240 kernel: ACPI: SRAT not present
Jul 15 11:20:55.723246 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 11:20:55.723252 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 11:20:55.723259 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 15 11:20:55.723265 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 15 11:20:55.723271 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 15 11:20:55.723279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:20:55.723285 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 11:20:55.723291 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 11:20:55.723297 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 11:20:55.723303 kernel: arm-pv: using stolen time PV
Jul 15 11:20:55.723310 kernel: Console: colour dummy device 80x25
Jul 15 11:20:55.723316 kernel: ACPI: Core revision 20210730
Jul 15 11:20:55.723323 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 11:20:55.723329 kernel: pid_max: default: 32768 minimum: 301
Jul 15 11:20:55.723336 kernel: LSM: Security Framework initializing
Jul 15 11:20:55.723343 kernel: SELinux: Initializing.
Jul 15 11:20:55.723349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:20:55.723356 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:20:55.723362 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 11:20:55.723368 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 15 11:20:55.723374 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 15 11:20:55.723380 kernel: Remapping and enabling EFI services.
Jul 15 11:20:55.723386 kernel: smp: Bringing up secondary CPUs ...
Jul 15 11:20:55.723393 kernel: Detected PIPT I-cache on CPU1
Jul 15 11:20:55.723401 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 11:20:55.723414 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 15 11:20:55.723420 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:20:55.723427 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 11:20:55.723433 kernel: Detected PIPT I-cache on CPU2
Jul 15 11:20:55.723439 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 11:20:55.723446 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 15 11:20:55.723452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:20:55.723458 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 11:20:55.723465 kernel: Detected PIPT I-cache on CPU3
Jul 15 11:20:55.723472 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 11:20:55.723478 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 15 11:20:55.723485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:20:55.723491 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 11:20:55.723501 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 11:20:55.723509 kernel: SMP: Total of 4 processors activated.
Jul 15 11:20:55.723515 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 11:20:55.723522 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 11:20:55.723529 kernel: CPU features: detected: Common not Private translations
Jul 15 11:20:55.723535 kernel: CPU features: detected: CRC32 instructions
Jul 15 11:20:55.723542 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 11:20:55.723548 kernel: CPU features: detected: LSE atomic instructions
Jul 15 11:20:55.723556 kernel: CPU features: detected: Privileged Access Never
Jul 15 11:20:55.723563 kernel: CPU features: detected: RAS Extension Support
Jul 15 11:20:55.723569 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 11:20:55.723576 kernel: CPU: All CPU(s) started at EL1
Jul 15 11:20:55.723582 kernel: alternatives: patching kernel code
Jul 15 11:20:55.723590 kernel: devtmpfs: initialized
Jul 15 11:20:55.723596 kernel: KASLR enabled
Jul 15 11:20:55.723603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 11:20:55.723610 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 11:20:55.723616 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 11:20:55.723623 kernel: SMBIOS 3.0.0 present.
Jul 15 11:20:55.723629 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 15 11:20:55.723636 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 11:20:55.723642 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 11:20:55.723650 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 11:20:55.723657 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 11:20:55.723664 kernel: audit: initializing netlink subsys (disabled)
Jul 15 11:20:55.723671 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Jul 15 11:20:55.723677 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 11:20:55.723683 kernel: cpuidle: using governor menu
Jul 15 11:20:55.723690 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 11:20:55.723697 kernel: ASID allocator initialised with 32768 entries
Jul 15 11:20:55.723703 kernel: ACPI: bus type PCI registered
Jul 15 11:20:55.723711 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 11:20:55.723718 kernel: Serial: AMBA PL011 UART driver
Jul 15 11:20:55.723724 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 11:20:55.723731 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 11:20:55.723737 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 11:20:55.723744 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 11:20:55.723751 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 11:20:55.723757 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 11:20:55.723764 kernel: ACPI: Added _OSI(Module Device)
Jul 15 11:20:55.723773 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 11:20:55.723779 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 11:20:55.723786 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 15 11:20:55.723792 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 15 11:20:55.723799 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 15 11:20:55.723805 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 11:20:55.723812 kernel: ACPI: Interpreter enabled
Jul 15 11:20:55.723832 kernel: ACPI: Using GIC for interrupt routing
Jul 15 11:20:55.723843 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 11:20:55.723853 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 11:20:55.723860 kernel: printk: console [ttyAMA0] enabled
Jul 15 11:20:55.723867 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 11:20:55.726126 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 11:20:55.726210 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 11:20:55.726269 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 11:20:55.726327 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 11:20:55.726389 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 11:20:55.726398 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 11:20:55.726416 kernel: PCI host bridge to bus 0000:00
Jul 15 11:20:55.726490 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 11:20:55.726554 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 11:20:55.726610 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 11:20:55.726661 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 11:20:55.726734 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 15 11:20:55.726802 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 11:20:55.726874 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 15 11:20:55.726934 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 15 11:20:55.726993 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 11:20:55.727051 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 11:20:55.727109 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 15 11:20:55.727169 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 15 11:20:55.727222 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 11:20:55.727273 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 11:20:55.727325 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 11:20:55.727333 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 11:20:55.727340 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 11:20:55.727347 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 11:20:55.727355 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 11:20:55.727362 kernel: iommu: Default domain type: Translated
Jul 15 11:20:55.727368 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 11:20:55.727375 kernel: vgaarb: loaded
Jul 15 11:20:55.727382 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 11:20:55.727388 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 11:20:55.727395 kernel: PTP clock support registered
Jul 15 11:20:55.727401 kernel: Registered efivars operations
Jul 15 11:20:55.727415 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 11:20:55.727422 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 11:20:55.727430 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 11:20:55.727436 kernel: pnp: PnP ACPI init
Jul 15 11:20:55.727508 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 11:20:55.727517 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 11:20:55.727524 kernel: NET: Registered PF_INET protocol family
Jul 15 11:20:55.727531 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 11:20:55.727538 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 11:20:55.727545 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 11:20:55.727553 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 11:20:55.727559 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 15 11:20:55.727566 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 11:20:55.727573 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:20:55.727579 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:20:55.727586 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 11:20:55.727592 kernel: PCI: CLS 0 bytes, default 64
Jul 15 11:20:55.727599 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 15 11:20:55.727605 kernel: kvm [1]: HYP mode not available
Jul 15 11:20:55.727613 kernel: Initialise system trusted keyrings
Jul 15 11:20:55.727620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 11:20:55.727627 kernel: Key type asymmetric registered
Jul 15 11:20:55.727633 kernel: Asymmetric key parser 'x509' registered
Jul 15 11:20:55.727640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 11:20:55.727646 kernel: io scheduler mq-deadline registered
Jul 15 11:20:55.727653 kernel: io scheduler kyber registered
Jul 15 11:20:55.727659 kernel: io scheduler bfq registered
Jul 15 11:20:55.727666 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 11:20:55.727674 kernel: ACPI: button: Power Button [PWRB]
Jul 15 11:20:55.727681 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 11:20:55.727740 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 11:20:55.727749 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 11:20:55.727756 kernel: thunder_xcv, ver 1.0
Jul 15 11:20:55.727763 kernel: thunder_bgx, ver 1.0
Jul 15 11:20:55.727769 kernel: nicpf, ver 1.0
Jul 15 11:20:55.727775 kernel: nicvf, ver 1.0
Jul 15 11:20:55.727847 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 11:20:55.727908 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T11:20:55 UTC (1752578455)
Jul 15 11:20:55.727917 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 11:20:55.727924 kernel: NET: Registered PF_INET6 protocol family
Jul 15 11:20:55.727931 kernel: Segment Routing with IPv6
Jul 15 11:20:55.727937 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 11:20:55.727944 kernel: NET: Registered PF_PACKET protocol family
Jul 15 11:20:55.727950 kernel: Key type dns_resolver registered
Jul 15 11:20:55.727957 kernel: registered taskstats version 1
Jul 15 11:20:55.727965 kernel: Loading compiled-in X.509 certificates
Jul 15 11:20:55.727971 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: 1835a6fea2ba29f82433ea6fde09cb345fc75fe9'
Jul 15 11:20:55.727978 kernel: Key type .fscrypt registered
Jul 15 11:20:55.727984 kernel: Key type fscrypt-provisioning registered
Jul 15 11:20:55.727991 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 11:20:55.727997 kernel: ima: Allocated hash algorithm: sha1
Jul 15 11:20:55.728004 kernel: ima: No architecture policies found
Jul 15 11:20:55.728010 kernel: clk: Disabling unused clocks
Jul 15 11:20:55.728017 kernel: Freeing unused kernel memory: 36416K
Jul 15 11:20:55.728024 kernel: Run /init as init process
Jul 15 11:20:55.728031 kernel: with arguments:
Jul 15 11:20:55.728037 kernel: /init
Jul 15 11:20:55.728044 kernel: with environment:
Jul 15 11:20:55.728050 kernel: HOME=/
Jul 15 11:20:55.728056 kernel: TERM=linux
Jul 15 11:20:55.728063 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 11:20:55.728071 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:20:55.728081 systemd[1]: Detected virtualization kvm.
Jul 15 11:20:55.728088 systemd[1]: Detected architecture arm64.
Jul 15 11:20:55.728095 systemd[1]: Running in initrd.
Jul 15 11:20:55.728102 systemd[1]: No hostname configured, using default hostname.
Jul 15 11:20:55.728108 systemd[1]: Hostname set to .
Jul 15 11:20:55.728116 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:20:55.728122 systemd[1]: Queued start job for default target initrd.target.
Jul 15 11:20:55.728129 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:20:55.728137 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:20:55.728144 systemd[1]: Reached target paths.target.
Jul 15 11:20:55.728151 systemd[1]: Reached target slices.target.
Jul 15 11:20:55.728158 systemd[1]: Reached target swap.target.
Jul 15 11:20:55.728164 systemd[1]: Reached target timers.target.
Jul 15 11:20:55.728172 systemd[1]: Listening on iscsid.socket.
Jul 15 11:20:55.728179 systemd[1]: Listening on iscsiuio.socket.
Jul 15 11:20:55.728187 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:20:55.728194 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:20:55.728201 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:20:55.728207 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:20:55.728214 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:20:55.728221 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:20:55.728228 systemd[1]: Reached target sockets.target.
Jul 15 11:20:55.728235 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:20:55.728287 systemd[1]: Finished network-cleanup.service.
Jul 15 11:20:55.728311 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 11:20:55.728318 systemd[1]: Starting systemd-journald.service...
Jul 15 11:20:55.728328 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:20:55.728336 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:20:55.728343 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 15 11:20:55.728350 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:20:55.728357 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 11:20:55.728364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:20:55.728371 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 15 11:20:55.728379 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 15 11:20:55.728386 kernel: audit: type=1130 audit(1752578455.725:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.728396 systemd-journald[291]: Journal started
Jul 15 11:20:55.728446 systemd-journald[291]: Runtime Journal (/run/log/journal/c9b3efa080064d1b8f2a437ff12c84eb) is 6.0M, max 48.7M, 42.6M free.
Jul 15 11:20:55.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.719075 systemd-modules-load[292]: Inserted module 'overlay'
Jul 15 11:20:55.729834 systemd[1]: Started systemd-journald.service.
Jul 15 11:20:55.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.730723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:20:55.735391 kernel: audit: type=1130 audit(1752578455.730:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.735416 kernel: audit: type=1130 audit(1752578455.731:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.745628 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 15 11:20:55.749566 kernel: audit: type=1130 audit(1752578455.746:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.746983 systemd[1]: Starting dracut-cmdline.service...
Jul 15 11:20:55.750513 systemd-resolved[293]: Positive Trust Anchors:
Jul 15 11:20:55.750521 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:20:55.750549 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:20:55.755794 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jul 15 11:20:55.758683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 11:20:55.757007 systemd[1]: Started systemd-resolved.service.
Jul 15 11:20:55.763116 kernel: audit: type=1130 audit(1752578455.759:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.763133 kernel: Bridge firewalling registered
Jul 15 11:20:55.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.760017 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:20:55.763072 systemd-modules-load[292]: Inserted module 'br_netfilter'
Jul 15 11:20:55.764355 dracut-cmdline[309]: dracut-dracut-053
Jul 15 11:20:55.765640 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=66cb9a8d6ebbbd62ba3e197b019773f14f902d0ee05716ff2fc41a726e431e67
Jul 15 11:20:55.774861 kernel: SCSI subsystem initialized
Jul 15 11:20:55.781096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 11:20:55.781115 kernel: device-mapper: uevent: version 1.0.3
Jul 15 11:20:55.781865 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 15 11:20:55.784095 systemd-modules-load[292]: Inserted module 'dm_multipath'
Jul 15 11:20:55.793936 kernel: audit: type=1130 audit(1752578455.785:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.785211 systemd[1]: Finished systemd-modules-load.service.
Jul 15 11:20:55.786601 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:20:55.795060 systemd[1]: Finished systemd-sysctl.service.
Jul 15 11:20:55.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.798865 kernel: audit: type=1130 audit(1752578455.796:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.830870 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 11:20:55.842861 kernel: iscsi: registered transport (tcp)
Jul 15 11:20:55.857860 kernel: iscsi: registered transport (qla4xxx)
Jul 15 11:20:55.857876 kernel: QLogic iSCSI HBA Driver
Jul 15 11:20:55.894181 systemd[1]: Finished dracut-cmdline.service.
Jul 15 11:20:55.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.895649 systemd[1]: Starting dracut-pre-udev.service...
Jul 15 11:20:55.897902 kernel: audit: type=1130 audit(1752578455.893:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:55.941868 kernel: raid6: neonx8 gen() 13743 MB/s
Jul 15 11:20:55.958857 kernel: raid6: neonx8 xor() 10768 MB/s
Jul 15 11:20:55.975855 kernel: raid6: neonx4 gen() 13470 MB/s
Jul 15 11:20:55.992853 kernel: raid6: neonx4 xor() 11127 MB/s
Jul 15 11:20:56.009871 kernel: raid6: neonx2 gen() 12918 MB/s
Jul 15 11:20:56.026857 kernel: raid6: neonx2 xor() 10367 MB/s
Jul 15 11:20:56.043854 kernel: raid6: neonx1 gen() 10558 MB/s
Jul 15 11:20:56.060853 kernel: raid6: neonx1 xor() 8764 MB/s
Jul 15 11:20:56.077852 kernel: raid6: int64x8 gen() 6269 MB/s
Jul 15 11:20:56.094854 kernel: raid6: int64x8 xor() 3536 MB/s
Jul 15 11:20:56.111855 kernel: raid6: int64x4 gen() 7212 MB/s
Jul 15 11:20:56.128851 kernel: raid6: int64x4 xor() 3839 MB/s
Jul 15 11:20:56.145856 kernel: raid6: int64x2 gen() 6099 MB/s
Jul 15 11:20:56.162853 kernel: raid6: int64x2 xor() 3300 MB/s
Jul 15 11:20:56.179853 kernel: raid6: int64x1 gen() 5011 MB/s
Jul 15 11:20:56.197037 kernel: raid6: int64x1 xor() 2628 MB/s
Jul 15 11:20:56.197052 kernel: raid6: using algorithm neonx8 gen() 13743 MB/s
Jul 15 11:20:56.197061 kernel: raid6: .... xor() 10768 MB/s, rmw enabled
Jul 15 11:20:56.197069 kernel: raid6: using neon recovery algorithm
Jul 15 11:20:56.207860 kernel: xor: measuring software checksum speed
Jul 15 11:20:56.207877 kernel: 8regs : 16722 MB/sec
Jul 15 11:20:56.209296 kernel: 32regs : 19529 MB/sec
Jul 15 11:20:56.209310 kernel: arm64_neon : 26272 MB/sec
Jul 15 11:20:56.209319 kernel: xor: using function: arm64_neon (26272 MB/sec)
Jul 15 11:20:56.263864 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 15 11:20:56.274032 systemd[1]: Finished dracut-pre-udev.service.
Jul 15 11:20:56.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:56.276879 kernel: audit: type=1130 audit(1752578456.273:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:56.276000 audit: BPF prog-id=7 op=LOAD
Jul 15 11:20:56.276000 audit: BPF prog-id=8 op=LOAD
Jul 15 11:20:56.277257 systemd[1]: Starting systemd-udevd.service...
Jul 15 11:20:56.289495 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Jul 15 11:20:56.292804 systemd[1]: Started systemd-udevd.service.
Jul 15 11:20:56.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:56.294588 systemd[1]: Starting dracut-pre-trigger.service...
Jul 15 11:20:56.303818 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Jul 15 11:20:56.328851 systemd[1]: Finished dracut-pre-trigger.service.
Jul 15 11:20:56.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:56.330146 systemd[1]: Starting systemd-udev-trigger.service...
Jul 15 11:20:56.367008 systemd[1]: Finished systemd-udev-trigger.service.
Jul 15 11:20:56.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:56.390313 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 11:20:56.395483 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 11:20:56.395499 kernel: GPT:9289727 != 19775487
Jul 15 11:20:56.395508 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 11:20:56.395517 kernel: GPT:9289727 != 19775487 Jul 15 11:20:56.395525 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:20:56.395534 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:20:56.411272 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:20:56.414135 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:20:56.414904 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:20:56.419857 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (539) Jul 15 11:20:56.421919 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:20:56.425509 systemd[1]: Starting disk-uuid.service... Jul 15 11:20:56.428899 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:20:56.431133 disk-uuid[564]: Primary Header is updated. Jul 15 11:20:56.431133 disk-uuid[564]: Secondary Entries is updated. Jul 15 11:20:56.431133 disk-uuid[564]: Secondary Header is updated. Jul 15 11:20:56.434859 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:20:57.451863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:20:57.451989 disk-uuid[565]: The operation has completed successfully. Jul 15 11:20:57.473896 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:20:57.474812 systemd[1]: Finished disk-uuid.service. Jul 15 11:20:57.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.479442 systemd[1]: Starting verity-setup.service... 
Jul 15 11:20:57.496166 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 15 11:20:57.517307 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:20:57.519426 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:20:57.521108 systemd[1]: Finished verity-setup.service. Jul 15 11:20:57.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.565728 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:20:57.566728 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:20:57.566419 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:20:57.567139 systemd[1]: Starting ignition-setup.service... Jul 15 11:20:57.568784 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:20:57.576209 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 11:20:57.576243 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:20:57.576253 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:20:57.584279 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:20:57.591313 systemd[1]: Finished ignition-setup.service. Jul 15 11:20:57.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.592721 systemd[1]: Starting ignition-fetch-offline.service... Jul 15 11:20:57.649385 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:20:57.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:20:57.649000 audit: BPF prog-id=9 op=LOAD Jul 15 11:20:57.651372 systemd[1]: Starting systemd-networkd.service... Jul 15 11:20:57.674409 ignition[649]: Ignition 2.14.0 Jul 15 11:20:57.674420 ignition[649]: Stage: fetch-offline Jul 15 11:20:57.674469 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:20:57.674481 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:20:57.674610 ignition[649]: parsed url from cmdline: "" Jul 15 11:20:57.674613 ignition[649]: no config URL provided Jul 15 11:20:57.674618 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:20:57.674626 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:20:57.674643 ignition[649]: op(1): [started] loading QEMU firmware config module Jul 15 11:20:57.679159 systemd-networkd[739]: lo: Link UP Jul 15 11:20:57.674648 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:20:57.679163 systemd-networkd[739]: lo: Gained carrier Jul 15 11:20:57.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.679773 systemd-networkd[739]: Enumeration completed Jul 15 11:20:57.680038 systemd[1]: Started systemd-networkd.service. Jul 15 11:20:57.681052 systemd[1]: Reached target network.target. Jul 15 11:20:57.683153 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:20:57.684337 systemd[1]: Starting iscsiuio.service... Jul 15 11:20:57.685272 systemd-networkd[739]: eth0: Link UP Jul 15 11:20:57.685275 systemd-networkd[739]: eth0: Gained carrier Jul 15 11:20:57.686264 ignition[649]: op(1): [finished] loading QEMU firmware config module Jul 15 11:20:57.686286 ignition[649]: QEMU firmware config was not found. Ignoring... Jul 15 11:20:57.693024 systemd[1]: Started iscsiuio.service. 
Jul 15 11:20:57.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.694599 systemd[1]: Starting iscsid.service... Jul 15 11:20:57.697740 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:20:57.697740 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:20:57.697740 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:20:57.697740 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:20:57.697740 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:20:57.697740 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:20:57.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.701329 systemd[1]: Started iscsid.service. Jul 15 11:20:57.704258 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:20:57.709941 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:20:57.714992 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:20:57.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 15 11:20:57.715784 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:20:57.716989 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:20:57.718281 systemd[1]: Reached target remote-fs.target. Jul 15 11:20:57.720226 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:20:57.727494 systemd[1]: Finished dracut-pre-mount.service. Jul 15 11:20:57.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.743420 ignition[649]: parsing config with SHA512: 8160bd8ab8e2288f568366ac689637b35076233b582d516a0960b10f57a922edc94b4dbb7e4c570e2b1a19df6b7c550fca403be1f8323fea2ba197da89c2b32e Jul 15 11:20:57.750071 unknown[649]: fetched base config from "system" Jul 15 11:20:57.750864 unknown[649]: fetched user config from "qemu" Jul 15 11:20:57.751958 ignition[649]: fetch-offline: fetch-offline passed Jul 15 11:20:57.752650 ignition[649]: Ignition finished successfully Jul 15 11:20:57.754071 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:20:57.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.754780 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:20:57.755522 systemd[1]: Starting ignition-kargs.service... 
Jul 15 11:20:57.763627 ignition[761]: Ignition 2.14.0 Jul 15 11:20:57.763642 ignition[761]: Stage: kargs Jul 15 11:20:57.763732 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:20:57.763741 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:20:57.764658 ignition[761]: kargs: kargs passed Jul 15 11:20:57.764700 ignition[761]: Ignition finished successfully Jul 15 11:20:57.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.766924 systemd[1]: Finished ignition-kargs.service. Jul 15 11:20:57.768573 systemd[1]: Starting ignition-disks.service... Jul 15 11:20:57.774992 ignition[767]: Ignition 2.14.0 Jul 15 11:20:57.775002 ignition[767]: Stage: disks Jul 15 11:20:57.775096 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:20:57.775106 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:20:57.778072 systemd[1]: Finished ignition-disks.service. Jul 15 11:20:57.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.776060 ignition[767]: disks: disks passed Jul 15 11:20:57.778800 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:20:57.776109 ignition[767]: Ignition finished successfully Jul 15 11:20:57.779741 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:20:57.780635 systemd[1]: Reached target local-fs.target. Jul 15 11:20:57.781633 systemd[1]: Reached target sysinit.target. Jul 15 11:20:57.782565 systemd[1]: Reached target basic.target. Jul 15 11:20:57.784370 systemd[1]: Starting systemd-fsck-root.service... 
Jul 15 11:20:57.794904 systemd-fsck[775]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 15 11:20:57.798623 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:20:57.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.800239 systemd[1]: Mounting sysroot.mount... Jul 15 11:20:57.806889 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:20:57.807267 systemd[1]: Mounted sysroot.mount. Jul 15 11:20:57.807871 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:20:57.809719 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:20:57.810493 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:20:57.810531 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:20:57.810557 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:20:57.812548 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:20:57.814516 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:20:57.818669 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:20:57.822811 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:20:57.826547 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:20:57.830277 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:20:57.855673 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:20:57.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:20:57.857040 systemd[1]: Starting ignition-mount.service... Jul 15 11:20:57.858173 systemd[1]: Starting sysroot-boot.service... Jul 15 11:20:57.862523 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Jul 15 11:20:57.870793 ignition[828]: INFO : Ignition 2.14.0 Jul 15 11:20:57.870793 ignition[828]: INFO : Stage: mount Jul 15 11:20:57.871996 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:20:57.871996 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:20:57.871996 ignition[828]: INFO : mount: mount passed Jul 15 11:20:57.871996 ignition[828]: INFO : Ignition finished successfully Jul 15 11:20:57.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:57.872584 systemd[1]: Finished ignition-mount.service. Jul 15 11:20:57.874301 systemd[1]: Finished sysroot-boot.service. Jul 15 11:20:58.527563 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:20:58.534282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (837) Jul 15 11:20:58.534315 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 11:20:58.534325 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:20:58.535257 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:20:58.538349 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 15 11:20:58.539725 systemd[1]: Starting ignition-files.service... 
Jul 15 11:20:58.552925 ignition[857]: INFO : Ignition 2.14.0 Jul 15 11:20:58.552925 ignition[857]: INFO : Stage: files Jul 15 11:20:58.554090 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:20:58.554090 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:20:58.554090 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:20:58.560019 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:20:58.560019 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:20:58.562186 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:20:58.562186 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:20:58.564124 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:20:58.564124 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:20:58.564124 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:20:58.564124 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 15 11:20:58.564124 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 15 11:20:58.562336 unknown[857]: wrote ssh authorized keys file for user: core Jul 15 11:20:58.597087 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 11:20:58.740873 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 15 
11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 11:20:58.742323 
ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 11:20:58.742323 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 15 11:20:58.941068 systemd-networkd[739]: eth0: Gained IPv6LL Jul 15 11:20:59.167459 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 11:20:59.372976 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 11:20:59.374470 ignition[857]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 15 
11:20:59.375383 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 11:20:59.375383 ignition[857]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:20:59.416938 ignition[857]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:20:59.417986 ignition[857]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 11:20:59.417986 ignition[857]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jul 15 11:20:59.417986 ignition[857]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 11:20:59.417986 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:20:59.417986 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:20:59.417986 ignition[857]: INFO : files: files passed Jul 15 11:20:59.417986 ignition[857]: INFO : Ignition finished successfully Jul 15 11:20:59.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:20:59.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.418377 systemd[1]: Finished ignition-files.service. Jul 15 11:20:59.420589 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 15 11:20:59.428553 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 15 11:20:59.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.421315 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 15 11:20:59.431846 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 11:20:59.421951 systemd[1]: Starting ignition-quench.service... Jul 15 11:20:59.425274 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 11:20:59.425351 systemd[1]: Finished ignition-quench.service. Jul 15 11:20:59.427975 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 15 11:20:59.429270 systemd[1]: Reached target ignition-complete.target. Jul 15 11:20:59.431187 systemd[1]: Starting initrd-parse-etc.service... Jul 15 11:20:59.442763 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 11:20:59.442849 systemd[1]: Finished initrd-parse-etc.service. 
Jul 15 11:20:59.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.444162 systemd[1]: Reached target initrd-fs.target. Jul 15 11:20:59.445158 systemd[1]: Reached target initrd.target. Jul 15 11:20:59.446117 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 15 11:20:59.446793 systemd[1]: Starting dracut-pre-pivot.service... Jul 15 11:20:59.456905 systemd[1]: Finished dracut-pre-pivot.service. Jul 15 11:20:59.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.458181 systemd[1]: Starting initrd-cleanup.service... Jul 15 11:20:59.465607 systemd[1]: Stopped target nss-lookup.target. Jul 15 11:20:59.466297 systemd[1]: Stopped target remote-cryptsetup.target. Jul 15 11:20:59.467347 systemd[1]: Stopped target timers.target. Jul 15 11:20:59.468353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 11:20:59.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.468470 systemd[1]: Stopped dracut-pre-pivot.service. Jul 15 11:20:59.469477 systemd[1]: Stopped target initrd.target. Jul 15 11:20:59.470418 systemd[1]: Stopped target basic.target. Jul 15 11:20:59.471348 systemd[1]: Stopped target ignition-complete.target. Jul 15 11:20:59.472371 systemd[1]: Stopped target ignition-diskful.target. 
Jul 15 11:20:59.473482 systemd[1]: Stopped target initrd-root-device.target. Jul 15 11:20:59.474521 systemd[1]: Stopped target remote-fs.target. Jul 15 11:20:59.475498 systemd[1]: Stopped target remote-fs-pre.target. Jul 15 11:20:59.476521 systemd[1]: Stopped target sysinit.target. Jul 15 11:20:59.477443 systemd[1]: Stopped target local-fs.target. Jul 15 11:20:59.478499 systemd[1]: Stopped target local-fs-pre.target. Jul 15 11:20:59.479453 systemd[1]: Stopped target swap.target. Jul 15 11:20:59.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.480348 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 11:20:59.480463 systemd[1]: Stopped dracut-pre-mount.service. Jul 15 11:20:59.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.481480 systemd[1]: Stopped target cryptsetup.target. Jul 15 11:20:59.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:20:59.482300 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 11:20:59.482400 systemd[1]: Stopped dracut-initqueue.service. Jul 15 11:20:59.483475 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 11:20:59.483567 systemd[1]: Stopped ignition-fetch-offline.service. Jul 15 11:20:59.484532 systemd[1]: Stopped target paths.target. Jul 15 11:20:59.485410 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 11:20:59.489887 systemd[1]: Stopped systemd-ask-password-console.path. Jul 15 11:20:59.490602 systemd[1]: Stopped target slices.target. 
Jul 15 11:20:59.491591 systemd[1]: Stopped target sockets.target.
Jul 15 11:20:59.492483 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 11:20:59.492556 systemd[1]: Closed iscsid.socket.
Jul 15 11:20:59.493371 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 11:20:59.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.493445 systemd[1]: Closed iscsiuio.socket.
Jul 15 11:20:59.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.494352 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 11:20:59.494460 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 15 11:20:59.495408 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 11:20:59.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.495500 systemd[1]: Stopped ignition-files.service.
Jul 15 11:20:59.497252 systemd[1]: Stopping ignition-mount.service...
Jul 15 11:20:59.498090 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 11:20:59.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.498213 systemd[1]: Stopped kmod-static-nodes.service.
Jul 15 11:20:59.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.499977 systemd[1]: Stopping sysroot-boot.service...
Jul 15 11:20:59.500831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 11:20:59.504806 ignition[897]: INFO : Ignition 2.14.0
Jul 15 11:20:59.504806 ignition[897]: INFO : Stage: umount
Jul 15 11:20:59.504806 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:20:59.504806 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:20:59.504806 ignition[897]: INFO : umount: umount passed
Jul 15 11:20:59.504806 ignition[897]: INFO : Ignition finished successfully
Jul 15 11:20:59.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.500972 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 15 11:20:59.501977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 11:20:59.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.502073 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 15 11:20:59.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.506080 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 11:20:59.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.506162 systemd[1]: Finished initrd-cleanup.service.
Jul 15 11:20:59.507387 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 11:20:59.507473 systemd[1]: Stopped ignition-mount.service.
Jul 15 11:20:59.509044 systemd[1]: Stopped target network.target.
Jul 15 11:20:59.509989 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 11:20:59.510037 systemd[1]: Stopped ignition-disks.service.
Jul 15 11:20:59.511173 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 11:20:59.511209 systemd[1]: Stopped ignition-kargs.service.
Jul 15 11:20:59.512457 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 11:20:59.512494 systemd[1]: Stopped ignition-setup.service.
Jul 15 11:20:59.513852 systemd[1]: Stopping systemd-networkd.service...
Jul 15 11:20:59.515057 systemd[1]: Stopping systemd-resolved.service...
Jul 15 11:20:59.516731 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 11:20:59.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.520869 systemd-networkd[739]: eth0: DHCPv6 lease lost
Jul 15 11:20:59.521912 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 11:20:59.522006 systemd[1]: Stopped systemd-networkd.service.
Jul 15 11:20:59.523027 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 11:20:59.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.523060 systemd[1]: Closed systemd-networkd.socket.
Jul 15 11:20:59.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.524606 systemd[1]: Stopping network-cleanup.service...
Jul 15 11:20:59.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.530000 audit: BPF prog-id=9 op=UNLOAD
Jul 15 11:20:59.526337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 11:20:59.526403 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 15 11:20:59.527976 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 11:20:59.528019 systemd[1]: Stopped systemd-sysctl.service.
Jul 15 11:20:59.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.529606 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 11:20:59.529648 systemd[1]: Stopped systemd-modules-load.service.
Jul 15 11:20:59.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.530515 systemd[1]: Stopping systemd-udevd.service...
Jul 15 11:20:59.535335 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 11:20:59.535819 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 11:20:59.535934 systemd[1]: Stopped systemd-resolved.service.
Jul 15 11:20:59.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.538014 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 11:20:59.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.538136 systemd[1]: Stopped systemd-udevd.service.
Jul 15 11:20:59.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.540133 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 11:20:59.546000 audit: BPF prog-id=6 op=UNLOAD
Jul 15 11:20:59.540171 systemd[1]: Closed systemd-udevd-control.socket.
Jul 15 11:20:59.540932 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 11:20:59.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.540962 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 15 11:20:59.542325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 11:20:59.542366 systemd[1]: Stopped dracut-pre-udev.service.
Jul 15 11:20:59.543600 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 11:20:59.543637 systemd[1]: Stopped dracut-cmdline.service.
Jul 15 11:20:59.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.544622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 11:20:59.544659 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 15 11:20:59.546570 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 15 11:20:59.547653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 11:20:59.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.547706 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 15 11:20:59.549490 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 11:20:59.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:20:59.549584 systemd[1]: Stopped network-cleanup.service.
Jul 15 11:20:59.552031 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 11:20:59.552101 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 15 11:20:59.556246 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 11:20:59.556325 systemd[1]: Stopped sysroot-boot.service.
Jul 15 11:20:59.557049 systemd[1]: Reached target initrd-switch-root.target.
Jul 15 11:20:59.558201 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 11:20:59.558245 systemd[1]: Stopped initrd-setup-root.service.
Jul 15 11:20:59.559994 systemd[1]: Starting initrd-switch-root.service...
Jul 15 11:20:59.565812 systemd[1]: Switching root.
Jul 15 11:20:59.566000 audit: BPF prog-id=8 op=UNLOAD
Jul 15 11:20:59.566000 audit: BPF prog-id=7 op=UNLOAD
Jul 15 11:20:59.567000 audit: BPF prog-id=5 op=UNLOAD
Jul 15 11:20:59.567000 audit: BPF prog-id=4 op=UNLOAD
Jul 15 11:20:59.567000 audit: BPF prog-id=3 op=UNLOAD
Jul 15 11:20:59.584512 iscsid[745]: iscsid shutting down.
Jul 15 11:20:59.585052 systemd-journald[291]: Received SIGTERM from PID 1 (n/a).
Jul 15 11:20:59.585103 systemd-journald[291]: Journal stopped
Jul 15 11:21:01.598005 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 15 11:21:01.598060 kernel: SELinux: Class anon_inode not defined in policy.
Jul 15 11:21:01.598072 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 15 11:21:01.598082 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 11:21:01.598092 kernel: SELinux: policy capability open_perms=1
Jul 15 11:21:01.598102 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 11:21:01.598111 kernel: SELinux: policy capability always_check_network=0
Jul 15 11:21:01.598121 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 11:21:01.598133 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 11:21:01.598146 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 11:21:01.598156 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 11:21:01.598166 systemd[1]: Successfully loaded SELinux policy in 32.133ms.
Jul 15 11:21:01.598185 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.975ms.
Jul 15 11:21:01.598197 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:21:01.598209 systemd[1]: Detected virtualization kvm.
Jul 15 11:21:01.598222 systemd[1]: Detected architecture arm64.
Jul 15 11:21:01.598232 systemd[1]: Detected first boot.
Jul 15 11:21:01.598242 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:21:01.598252 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 15 11:21:01.598263 systemd[1]: Populated /etc with preset unit settings.
Jul 15 11:21:01.598273 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:21:01.598285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:21:01.598296 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:21:01.598308 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 11:21:01.598320 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Jul 15 11:21:01.598330 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 15 11:21:01.598340 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 15 11:21:01.598355 systemd[1]: Created slice system-getty.slice.
Jul 15 11:21:01.598365 systemd[1]: Created slice system-modprobe.slice.
Jul 15 11:21:01.598376 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 15 11:21:01.598398 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 15 11:21:01.598410 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 15 11:21:01.598424 systemd[1]: Created slice user.slice.
Jul 15 11:21:01.598434 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:21:01.598445 systemd[1]: Started systemd-ask-password-wall.path.
Jul 15 11:21:01.598455 systemd[1]: Set up automount boot.automount.
Jul 15 11:21:01.598465 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 15 11:21:01.598476 systemd[1]: Reached target integritysetup.target.
Jul 15 11:21:01.598487 systemd[1]: Reached target remote-cryptsetup.target.
Jul 15 11:21:01.598502 systemd[1]: Reached target remote-fs.target.
Jul 15 11:21:01.598513 systemd[1]: Reached target slices.target.
Jul 15 11:21:01.598524 systemd[1]: Reached target swap.target.
Jul 15 11:21:01.598534 systemd[1]: Reached target torcx.target.
Jul 15 11:21:01.598545 systemd[1]: Reached target veritysetup.target.
Jul 15 11:21:01.598556 systemd[1]: Listening on systemd-coredump.socket.
Jul 15 11:21:01.598566 systemd[1]: Listening on systemd-initctl.socket.
Jul 15 11:21:01.598578 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:21:01.598588 kernel: kauditd_printk_skb: 78 callbacks suppressed
Jul 15 11:21:01.598599 kernel: audit: type=1400 audit(1752578461.495:82): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:21:01.598611 kernel: audit: type=1335 audit(1752578461.495:83): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 15 11:21:01.598621 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:21:01.598632 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:21:01.598642 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:21:01.598653 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:21:01.598663 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:21:01.598675 systemd[1]: Listening on systemd-userdbd.socket.
Jul 15 11:21:01.598685 systemd[1]: Mounting dev-hugepages.mount...
Jul 15 11:21:01.598696 systemd[1]: Mounting dev-mqueue.mount...
Jul 15 11:21:01.598706 systemd[1]: Mounting media.mount...
Jul 15 11:21:01.598716 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 15 11:21:01.598727 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 15 11:21:01.598737 systemd[1]: Mounting tmp.mount...
Jul 15 11:21:01.598747 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 15 11:21:01.598757 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:21:01.598770 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:21:01.598780 systemd[1]: Starting modprobe@configfs.service...
Jul 15 11:21:01.598791 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:21:01.598801 systemd[1]: Starting modprobe@drm.service...
Jul 15 11:21:01.598811 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:21:01.598822 systemd[1]: Starting modprobe@fuse.service...
Jul 15 11:21:01.598832 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:21:01.598856 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 11:21:01.598868 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 15 11:21:01.598879 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 15 11:21:01.598890 systemd[1]: Starting systemd-journald.service...
Jul 15 11:21:01.598900 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:21:01.598910 systemd[1]: Starting systemd-network-generator.service...
Jul 15 11:21:01.598921 systemd[1]: Starting systemd-remount-fs.service...
Jul 15 11:21:01.598931 kernel: loop: module loaded
Jul 15 11:21:01.598941 systemd[1]: Starting systemd-udev-trigger.service...
Jul 15 11:21:01.598951 systemd[1]: Mounted dev-hugepages.mount.
Jul 15 11:21:01.598961 systemd[1]: Mounted dev-mqueue.mount.
Jul 15 11:21:01.598973 systemd[1]: Mounted media.mount.
Jul 15 11:21:01.598984 kernel: fuse: init (API version 7.34)
Jul 15 11:21:01.598994 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 15 11:21:01.599004 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 15 11:21:01.599014 systemd[1]: Mounted tmp.mount.
Jul 15 11:21:01.599024 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:21:01.599035 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 11:21:01.599045 kernel: audit: type=1130 audit(1752578461.569:84): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599055 systemd[1]: Finished modprobe@configfs.service.
Jul 15 11:21:01.599067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 11:21:01.599078 kernel: audit: type=1130 audit(1752578461.573:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599088 systemd[1]: Finished modprobe@dm_mod.service.
Jul 15 11:21:01.599099 kernel: audit: type=1131 audit(1752578461.573:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599110 kernel: audit: type=1130 audit(1752578461.578:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599120 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 11:21:01.599131 kernel: audit: type=1131 audit(1752578461.578:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599142 systemd[1]: Finished modprobe@drm.service.
Jul 15 11:21:01.599152 kernel: audit: type=1130 audit(1752578461.584:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:21:01.599423 kernel: audit: type=1131 audit(1752578461.584:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599443 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:21:01.599455 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 11:21:01.599466 kernel: audit: type=1130 audit(1752578461.589:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.599476 systemd[1]: Finished modprobe@fuse.service.
Jul 15 11:21:01.599486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:21:01.599499 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:21:01.599509 systemd[1]: Finished systemd-modules-load.service.
Jul 15 11:21:01.599519 systemd[1]: Finished systemd-network-generator.service.
Jul 15 11:21:01.599532 systemd-journald[1027]: Journal started
Jul 15 11:21:01.599581 systemd-journald[1027]: Runtime Journal (/run/log/journal/c9b3efa080064d1b8f2a437ff12c84eb) is 6.0M, max 48.7M, 42.6M free.
Jul 15 11:21:01.495000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:21:01.495000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 15 11:21:01.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.600925 systemd[1]: Started systemd-journald.service.
Jul 15 11:21:01.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.596000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 15 11:21:01.596000 audit[1027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff52252d0 a2=4000 a3=1 items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:21:01.596000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 15 11:21:01.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.601434 systemd[1]: Finished systemd-remount-fs.service.
Jul 15 11:21:01.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.602983 systemd[1]: Reached target network-pre.target.
Jul 15 11:21:01.604686 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 15 11:21:01.606500 systemd[1]: Mounting sys-kernel-config.mount...
Jul 15 11:21:01.607183 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 11:21:01.608878 systemd[1]: Starting systemd-hwdb-update.service...
Jul 15 11:21:01.610826 systemd[1]: Starting systemd-journal-flush.service...
Jul 15 11:21:01.611496 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:21:01.612693 systemd[1]: Starting systemd-random-seed.service...
Jul 15 11:21:01.613506 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:21:01.614712 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:21:01.621368 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 15 11:21:01.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.623638 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 15 11:21:01.624858 systemd[1]: Finished systemd-udev-trigger.service.
Jul 15 11:21:01.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.625760 systemd[1]: Mounted sys-kernel-config.mount.
Jul 15 11:21:01.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.631698 systemd[1]: Finished systemd-random-seed.service.
Jul 15 11:21:01.632725 systemd[1]: Reached target first-boot-complete.target.
Jul 15 11:21:01.634663 systemd[1]: Starting systemd-sysusers.service...
Jul 15 11:21:01.636456 systemd[1]: Starting systemd-udev-settle.service...
Jul 15 11:21:01.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.642423 systemd[1]: Finished systemd-sysctl.service.
Jul 15 11:21:01.643671 systemd-journald[1027]: Time spent on flushing to /var/log/journal/c9b3efa080064d1b8f2a437ff12c84eb is 17.923ms for 935 entries.
Jul 15 11:21:01.643671 systemd-journald[1027]: System Journal (/var/log/journal/c9b3efa080064d1b8f2a437ff12c84eb) is 8.0M, max 195.6M, 187.6M free.
Jul 15 11:21:01.671446 systemd-journald[1027]: Received client request to flush runtime journal.
Jul 15 11:21:01.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.657250 systemd[1]: Finished systemd-sysusers.service.
Jul 15 11:21:01.671711 udevadm[1081]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 15 11:21:01.659187 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:21:01.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.673556 systemd[1]: Finished systemd-journal-flush.service.
Jul 15 11:21:01.680898 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:21:01.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.973062 systemd[1]: Finished systemd-hwdb-update.service.
Jul 15 11:21:01.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:01.974958 systemd[1]: Starting systemd-udevd.service...
Jul 15 11:21:01.993973 systemd-udevd[1090]: Using default interface naming scheme 'v252'.
Jul 15 11:21:02.004962 systemd[1]: Started systemd-udevd.service.
Jul 15 11:21:02.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:02.007506 systemd[1]: Starting systemd-networkd.service...
Jul 15 11:21:02.014935 systemd[1]: Starting systemd-userdbd.service...
Jul 15 11:21:02.025610 systemd[1]: Found device dev-ttyAMA0.device.
Jul 15 11:21:02.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:02.052917 systemd[1]: Started systemd-userdbd.service.
Jul 15 11:21:02.056623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 15 11:21:02.114226 systemd-networkd[1098]: lo: Link UP
Jul 15 11:21:02.114240 systemd-networkd[1098]: lo: Gained carrier
Jul 15 11:21:02.114639 systemd-networkd[1098]: Enumeration completed
Jul 15 11:21:02.114753 systemd-networkd[1098]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 11:21:02.114764 systemd[1]: Started systemd-networkd.service. Jul 15 11:21:02.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.115940 systemd-networkd[1098]: eth0: Link UP Jul 15 11:21:02.115951 systemd-networkd[1098]: eth0: Gained carrier Jul 15 11:21:02.118305 systemd[1]: Finished systemd-udev-settle.service. Jul 15 11:21:02.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.120213 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:21:02.132109 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:21:02.140961 systemd-networkd[1098]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:21:02.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.154779 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:21:02.155633 systemd[1]: Reached target cryptsetup.target. Jul 15 11:21:02.157542 systemd[1]: Starting lvm2-activation.service... Jul 15 11:21:02.161196 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:21:02.188828 systemd[1]: Finished lvm2-activation.service. Jul 15 11:21:02.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:02.189597 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:21:02.190269 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:21:02.190302 systemd[1]: Reached target local-fs.target. Jul 15 11:21:02.190873 systemd[1]: Reached target machines.target. Jul 15 11:21:02.192699 systemd[1]: Starting ldconfig.service... Jul 15 11:21:02.193656 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.193735 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:21:02.195190 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:21:02.197341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:21:02.199576 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:21:02.201551 systemd[1]: Starting systemd-sysext.service... Jul 15 11:21:02.202725 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl) Jul 15 11:21:02.204088 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:21:02.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.206317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 15 11:21:02.212212 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:21:02.215665 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:21:02.215981 systemd[1]: Unmounted usr-share-oem.mount. 
Jul 15 11:21:02.228860 kernel: loop0: detected capacity change from 0 to 203944 Jul 15 11:21:02.275624 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:21:02.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.286855 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:21:02.291875 systemd-fsck[1141]: fsck.fat 4.2 (2021-01-31) Jul 15 11:21:02.291875 systemd-fsck[1141]: /dev/vda1: 236 files, 117310/258078 clusters Jul 15 11:21:02.292984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:21:02.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.305870 kernel: loop1: detected capacity change from 0 to 203944 Jul 15 11:21:02.310019 (sd-sysext)[1147]: Using extensions 'kubernetes'. Jul 15 11:21:02.310367 (sd-sysext)[1147]: Merged extensions into '/usr'. Jul 15 11:21:02.326919 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.328222 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:21:02.330001 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:21:02.331696 systemd[1]: Starting modprobe@loop.service... Jul 15 11:21:02.332351 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.332485 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 15 11:21:02.333230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:21:02.333371 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:21:02.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.334485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:21:02.334614 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:21:02.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.335862 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:21:02.336010 systemd[1]: Finished modprobe@loop.service. Jul 15 11:21:02.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:02.337089 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:21:02.337191 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.377588 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:21:02.380577 systemd[1]: Finished ldconfig.service. Jul 15 11:21:02.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.560179 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:21:02.562040 systemd[1]: Mounting boot.mount... Jul 15 11:21:02.563811 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:21:02.570082 systemd[1]: Mounted boot.mount. Jul 15 11:21:02.570893 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:21:02.572622 systemd[1]: Finished systemd-sysext.service. Jul 15 11:21:02.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.574650 systemd[1]: Starting ensure-sysext.service... Jul 15 11:21:02.576473 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:21:02.577702 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:21:02.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.581733 systemd[1]: Reloading. Jul 15 11:21:02.585709 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Jul 15 11:21:02.586767 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:21:02.588085 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 11:21:02.619127 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-07-15T11:21:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:21:02.619605 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-07-15T11:21:02Z" level=info msg="torcx already run" Jul 15 11:21:02.682875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:21:02.682900 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:21:02.699118 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:21:02.746604 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:21:02.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.750436 systemd[1]: Starting audit-rules.service... Jul 15 11:21:02.752239 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:21:02.754274 systemd[1]: Starting systemd-journal-catalog-update.service... 
Jul 15 11:21:02.756684 systemd[1]: Starting systemd-resolved.service... Jul 15 11:21:02.758915 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:21:02.760608 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:21:02.762176 systemd[1]: Finished clean-ca-certificates.service. Jul 15 11:21:02.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.766000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.765628 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:21:02.769343 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.770606 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:21:02.772482 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:21:02.774461 systemd[1]: Starting modprobe@loop.service... Jul 15 11:21:02.775231 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.775450 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:21:02.775601 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:21:02.776772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:21:02.776944 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 15 11:21:02.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.779255 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:21:02.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.780469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:21:02.780616 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:21:02.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.781779 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:21:02.781945 systemd[1]: Finished modprobe@loop.service. Jul 15 11:21:02.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:02.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.783055 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:21:02.783192 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.784648 systemd[1]: Starting systemd-update-done.service... Jul 15 11:21:02.785853 systemd[1]: Finished systemd-update-utmp.service. Jul 15 11:21:02.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.788829 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.789973 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:21:02.791756 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:21:02.793424 systemd[1]: Starting modprobe@loop.service... Jul 15 11:21:02.794049 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.794173 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:21:02.794261 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:21:02.795105 systemd[1]: Finished systemd-update-done.service. 
Jul 15 11:21:02.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.796196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:21:02.796329 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:21:02.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.797540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:21:02.797677 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:21:02.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:02.799075 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:21:02.799229 systemd[1]: Finished modprobe@loop.service. Jul 15 11:21:02.800534 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:21:02.800619 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.802963 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.804154 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:21:02.805855 systemd[1]: Starting modprobe@drm.service... Jul 15 11:21:02.807561 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:21:02.809283 systemd[1]: Starting modprobe@loop.service... Jul 15 11:21:02.809916 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.810040 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:21:02.811457 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:21:02.812216 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:21:02.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:02.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.813553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:21:02.813707 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:21:02.814827 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:21:02.814966 systemd[1]: Finished modprobe@drm.service. Jul 15 11:21:02.816022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:21:02.816148 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:21:02.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.817155 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:21:02.817314 systemd[1]: Finished modprobe@loop.service. Jul 15 11:21:02.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:02.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.818325 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:21:02.818427 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:02.819996 systemd[1]: Finished ensure-sysext.service. Jul 15 11:21:02.834000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:21:02.834000 audit[1283]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd69c7ab0 a2=420 a3=0 items=0 ppid=1232 pid=1283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:02.834000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:21:02.835289 augenrules[1283]: No rules Jul 15 11:21:02.836209 systemd[1]: Finished audit-rules.service. Jul 15 11:21:02.842073 systemd[1]: Started systemd-timesyncd.service. Jul 15 11:21:02.842753 systemd-timesyncd[1238]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 11:21:02.842815 systemd-timesyncd[1238]: Initial clock synchronization to Tue 2025-07-15 11:21:02.522828 UTC. Jul 15 11:21:02.843033 systemd[1]: Reached target time-set.target. 
Jul 15 11:21:02.854816 systemd-resolved[1237]: Positive Trust Anchors: Jul 15 11:21:02.854829 systemd-resolved[1237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:21:02.854867 systemd-resolved[1237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:21:02.864750 systemd-resolved[1237]: Defaulting to hostname 'linux'. Jul 15 11:21:02.868220 systemd[1]: Started systemd-resolved.service. Jul 15 11:21:02.868931 systemd[1]: Reached target network.target. Jul 15 11:21:02.869494 systemd[1]: Reached target nss-lookup.target. Jul 15 11:21:02.870074 systemd[1]: Reached target sysinit.target. Jul 15 11:21:02.870689 systemd[1]: Started motdgen.path. Jul 15 11:21:02.871229 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:21:02.872137 systemd[1]: Started logrotate.timer. Jul 15 11:21:02.872759 systemd[1]: Started mdadm.timer. Jul 15 11:21:02.873275 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 15 11:21:02.873883 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 11:21:02.873909 systemd[1]: Reached target paths.target. Jul 15 11:21:02.874424 systemd[1]: Reached target timers.target. Jul 15 11:21:02.875280 systemd[1]: Listening on dbus.socket. Jul 15 11:21:02.876997 systemd[1]: Starting docker.socket... Jul 15 11:21:02.878507 systemd[1]: Listening on sshd.socket. 
Jul 15 11:21:02.879204 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:21:02.879528 systemd[1]: Listening on docker.socket. Jul 15 11:21:02.880117 systemd[1]: Reached target sockets.target. Jul 15 11:21:02.880670 systemd[1]: Reached target basic.target. Jul 15 11:21:02.881396 systemd[1]: System is tainted: cgroupsv1 Jul 15 11:21:02.881442 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.881463 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:21:02.882452 systemd[1]: Starting containerd.service... Jul 15 11:21:02.884054 systemd[1]: Starting dbus.service... Jul 15 11:21:02.885524 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:21:02.887215 systemd[1]: Starting extend-filesystems.service... Jul 15 11:21:02.887911 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:21:02.889223 systemd[1]: Starting motdgen.service... Jul 15 11:21:02.890984 systemd[1]: Starting prepare-helm.service... Jul 15 11:21:02.893760 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:21:02.893866 jq[1295]: false Jul 15 11:21:02.895622 systemd[1]: Starting sshd-keygen.service... Jul 15 11:21:02.897906 systemd[1]: Starting systemd-logind.service... Jul 15 11:21:02.898800 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:21:02.898879 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:21:02.900034 systemd[1]: Starting update-engine.service... 
Jul 15 11:21:02.901680 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:21:02.904004 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 11:21:02.904459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:21:02.908927 jq[1309]: true Jul 15 11:21:02.910365 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 11:21:02.910612 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:21:02.921372 extend-filesystems[1296]: Found loop1 Jul 15 11:21:02.921372 extend-filesystems[1296]: Found vda Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda1 Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda2 Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda3 Jul 15 11:21:02.922725 extend-filesystems[1296]: Found usr Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda4 Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda6 Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda7 Jul 15 11:21:02.922725 extend-filesystems[1296]: Found vda9 Jul 15 11:21:02.922725 extend-filesystems[1296]: Checking size of /dev/vda9 Jul 15 11:21:02.928605 jq[1321]: true Jul 15 11:21:02.945330 tar[1312]: linux-arm64/helm Jul 15 11:21:02.938738 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:21:02.945652 extend-filesystems[1296]: Resized partition /dev/vda9 Jul 15 11:21:02.939022 systemd[1]: Finished motdgen.service. Jul 15 11:21:02.955119 extend-filesystems[1336]: resize2fs 1.46.5 (30-Dec-2021) Jul 15 11:21:02.963885 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 11:21:02.969133 dbus-daemon[1293]: [system] SELinux support is enabled Jul 15 11:21:02.972138 systemd[1]: Started dbus.service. Jul 15 11:21:02.975201 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 15 11:21:02.975227 systemd[1]: Reached target system-config.target. Jul 15 11:21:02.975916 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:21:02.975935 systemd[1]: Reached target user-config.target. Jul 15 11:21:02.990065 update_engine[1308]: I0715 11:21:02.989809 1308 main.cc:92] Flatcar Update Engine starting Jul 15 11:21:02.992254 systemd[1]: Started update-engine.service. Jul 15 11:21:02.994397 systemd[1]: Started locksmithd.service. Jul 15 11:21:03.004049 update_engine[1308]: I0715 11:21:02.992262 1308 update_check_scheduler.cc:74] Next update check in 4m49s Jul 15 11:21:03.002054 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 11:21:03.005124 systemd-logind[1305]: New seat seat0. Jul 15 11:21:03.009823 systemd[1]: Started systemd-logind.service. Jul 15 11:21:03.011860 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 11:21:03.036171 extend-filesystems[1336]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 11:21:03.036171 extend-filesystems[1336]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 11:21:03.036171 extend-filesystems[1336]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 11:21:03.038977 extend-filesystems[1296]: Resized filesystem in /dev/vda9 Jul 15 11:21:03.038300 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 11:21:03.038550 systemd[1]: Finished extend-filesystems.service. Jul 15 11:21:03.041113 bash[1347]: Updated "/home/core/.ssh/authorized_keys" Jul 15 11:21:03.041606 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Jul 15 11:21:03.048363 env[1322]: time="2025-07-15T11:21:03.047581445Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 15 11:21:03.067573 env[1322]: time="2025-07-15T11:21:03.067477331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 15 11:21:03.067829 env[1322]: time="2025-07-15T11:21:03.067797324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:21:03.069072 env[1322]: time="2025-07-15T11:21:03.069042699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:21:03.069172 env[1322]: time="2025-07-15T11:21:03.069156019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:21:03.069505 env[1322]: time="2025-07-15T11:21:03.069481312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:21:03.069596 env[1322]: time="2025-07-15T11:21:03.069580309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 15 11:21:03.069654 env[1322]: time="2025-07-15T11:21:03.069638448Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 15 11:21:03.069705 env[1322]: time="2025-07-15T11:21:03.069692823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 15 11:21:03.069880 env[1322]: time="2025-07-15T11:21:03.069864014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:21:03.070217 env[1322]: time="2025-07-15T11:21:03.070196257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:21:03.070460 env[1322]: time="2025-07-15T11:21:03.070434879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:21:03.070544 env[1322]: time="2025-07-15T11:21:03.070529192Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 15 11:21:03.070670 env[1322]: time="2025-07-15T11:21:03.070651152Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 15 11:21:03.070734 env[1322]: time="2025-07-15T11:21:03.070720159Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074120160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074173269Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074187861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074221769Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074235862Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074251069Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074265200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074628778Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074649975Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074664261Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074676779Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074689720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 15 11:21:03.074929 env[1322]: time="2025-07-15T11:21:03.074822395Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 15 11:21:03.075317 env[1322]: time="2025-07-15T11:21:03.075294302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 15 11:21:03.075756 env[1322]: time="2025-07-15T11:21:03.075732108Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 15 11:21:03.075921 env[1322]: time="2025-07-15T11:21:03.075899497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076003 env[1322]: time="2025-07-15T11:21:03.075987320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 15 11:21:03.076200 env[1322]: time="2025-07-15T11:21:03.076181128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076280 env[1322]: time="2025-07-15T11:21:03.076263344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076343 env[1322]: time="2025-07-15T11:21:03.076327243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076402 env[1322]: time="2025-07-15T11:21:03.076388608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076512 env[1322]: time="2025-07-15T11:21:03.076463719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076590 env[1322]: time="2025-07-15T11:21:03.076574352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076647 env[1322]: time="2025-07-15T11:21:03.076634411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076702 env[1322]: time="2025-07-15T11:21:03.076688556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.076769 env[1322]: time="2025-07-15T11:21:03.076756640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 15 11:21:03.076998 env[1322]: time="2025-07-15T11:21:03.076976907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.077008 locksmithd[1353]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 11:21:03.077301 env[1322]: time="2025-07-15T11:21:03.077281502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.077384 env[1322]: time="2025-07-15T11:21:03.077370015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.077442 env[1322]: time="2025-07-15T11:21:03.077428154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 15 11:21:03.077510 env[1322]: time="2025-07-15T11:21:03.077494357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 15 11:21:03.077566 env[1322]: time="2025-07-15T11:21:03.077553264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 15 11:21:03.077627 env[1322]: time="2025-07-15T11:21:03.077613899Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 15 11:21:03.077705 env[1322]: time="2025-07-15T11:21:03.077691698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 15 11:21:03.078020 env[1322]: time="2025-07-15T11:21:03.077964881Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 15 11:21:03.080237 env[1322]: time="2025-07-15T11:21:03.078426420Z" level=info msg="Connect containerd service"
Jul 15 11:21:03.080237 env[1322]: time="2025-07-15T11:21:03.078468661Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 15 11:21:03.080948 env[1322]: time="2025-07-15T11:21:03.080923121Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:21:03.081226 env[1322]: time="2025-07-15T11:21:03.081133519Z" level=info msg="Start subscribing containerd event"
Jul 15 11:21:03.081226 env[1322]: time="2025-07-15T11:21:03.081192310Z" level=info msg="Start recovering state"
Jul 15 11:21:03.081345 env[1322]: time="2025-07-15T11:21:03.081256555Z" level=info msg="Start event monitor"
Jul 15 11:21:03.081345 env[1322]: time="2025-07-15T11:21:03.081273643Z" level=info msg="Start snapshots syncer"
Jul 15 11:21:03.081345 env[1322]: time="2025-07-15T11:21:03.081283090Z" level=info msg="Start cni network conf syncer for default"
Jul 15 11:21:03.081345 env[1322]: time="2025-07-15T11:21:03.081290923Z" level=info msg="Start streaming server"
Jul 15 11:21:03.082008 env[1322]: time="2025-07-15T11:21:03.081969580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 11:21:03.082068 env[1322]: time="2025-07-15T11:21:03.082036167Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 11:21:03.082188 systemd[1]: Started containerd.service.
Jul 15 11:21:03.083308 env[1322]: time="2025-07-15T11:21:03.083283692Z" level=info msg="containerd successfully booted in 0.036562s"
Jul 15 11:21:03.326935 tar[1312]: linux-arm64/LICENSE
Jul 15 11:21:03.327033 tar[1312]: linux-arm64/README.md
Jul 15 11:21:03.330926 systemd[1]: Finished prepare-helm.service.
Jul 15 11:21:04.124972 systemd-networkd[1098]: eth0: Gained IPv6LL
Jul 15 11:21:04.127001 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 15 11:21:04.127938 systemd[1]: Reached target network-online.target.
Jul 15 11:21:04.129955 systemd[1]: Starting kubelet.service...
Jul 15 11:21:04.243670 sshd_keygen[1323]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 11:21:04.262803 systemd[1]: Finished sshd-keygen.service.
Jul 15 11:21:04.264912 systemd[1]: Starting issuegen.service...
Jul 15 11:21:04.269621 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 11:21:04.269900 systemd[1]: Finished issuegen.service.
Jul 15 11:21:04.271815 systemd[1]: Starting systemd-user-sessions.service...
Jul 15 11:21:04.277525 systemd[1]: Finished systemd-user-sessions.service.
Jul 15 11:21:04.279504 systemd[1]: Started getty@tty1.service.
Jul 15 11:21:04.281408 systemd[1]: Started serial-getty@ttyAMA0.service.
Jul 15 11:21:04.282906 systemd[1]: Reached target getty.target.
Jul 15 11:21:04.690917 systemd[1]: Started kubelet.service.
Jul 15 11:21:04.691968 systemd[1]: Reached target multi-user.target.
Jul 15 11:21:04.694102 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 15 11:21:04.700101 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 15 11:21:04.700322 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 15 11:21:04.701192 systemd[1]: Startup finished in 4.633s (kernel) + 5.062s (userspace) = 9.695s.
Jul 15 11:21:05.108992 kubelet[1395]: E0715 11:21:05.108886 1395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:21:05.110664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:21:05.110803 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:21:08.143277 systemd[1]: Created slice system-sshd.slice.
Jul 15 11:21:08.144485 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:52062.service.
Jul 15 11:21:08.187803 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 52062 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.189603 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.197999 systemd-logind[1305]: New session 1 of user core.
Jul 15 11:21:08.198742 systemd[1]: Created slice user-500.slice.
Jul 15 11:21:08.199674 systemd[1]: Starting user-runtime-dir@500.service...
Jul 15 11:21:08.208009 systemd[1]: Finished user-runtime-dir@500.service.
Jul 15 11:21:08.209084 systemd[1]: Starting user@500.service...
Jul 15 11:21:08.211797 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.269522 systemd[1409]: Queued start job for default target default.target.
Jul 15 11:21:08.269737 systemd[1409]: Reached target paths.target.
Jul 15 11:21:08.269752 systemd[1409]: Reached target sockets.target.
Jul 15 11:21:08.269763 systemd[1409]: Reached target timers.target.
Jul 15 11:21:08.269772 systemd[1409]: Reached target basic.target.
Jul 15 11:21:08.269816 systemd[1409]: Reached target default.target.
Jul 15 11:21:08.269870 systemd[1409]: Startup finished in 53ms.
Jul 15 11:21:08.269917 systemd[1]: Started user@500.service.
Jul 15 11:21:08.270777 systemd[1]: Started session-1.scope.
Jul 15 11:21:08.318684 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:52074.service.
Jul 15 11:21:08.360792 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 52074 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.361983 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.365973 systemd[1]: Started session-2.scope.
Jul 15 11:21:08.366150 systemd-logind[1305]: New session 2 of user core.
Jul 15 11:21:08.420568 sshd[1419]: pam_unix(sshd:session): session closed for user core
Jul 15 11:21:08.422562 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:52084.service.
Jul 15 11:21:08.423435 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:52074.service: Deactivated successfully.
Jul 15 11:21:08.424190 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit.
Jul 15 11:21:08.424245 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 11:21:08.425248 systemd-logind[1305]: Removed session 2.
Jul 15 11:21:08.459371 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 52084 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.460477 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.463369 systemd-logind[1305]: New session 3 of user core.
Jul 15 11:21:08.464076 systemd[1]: Started session-3.scope.
Jul 15 11:21:08.512936 sshd[1424]: pam_unix(sshd:session): session closed for user core
Jul 15 11:21:08.514661 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:52098.service.
Jul 15 11:21:08.515665 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:52084.service: Deactivated successfully.
Jul 15 11:21:08.516612 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 11:21:08.516987 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit.
Jul 15 11:21:08.517711 systemd-logind[1305]: Removed session 3.
Jul 15 11:21:08.551012 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.552449 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.555972 systemd[1]: Started session-4.scope.
Jul 15 11:21:08.556001 systemd-logind[1305]: New session 4 of user core.
Jul 15 11:21:08.609793 sshd[1431]: pam_unix(sshd:session): session closed for user core
Jul 15 11:21:08.611885 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:52104.service.
Jul 15 11:21:08.612281 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:52098.service: Deactivated successfully.
Jul 15 11:21:08.613142 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit.
Jul 15 11:21:08.613191 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 11:21:08.613888 systemd-logind[1305]: Removed session 4.
Jul 15 11:21:08.648626 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 52104 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.649737 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.654641 systemd-logind[1305]: New session 5 of user core.
Jul 15 11:21:08.655008 systemd[1]: Started session-5.scope.
Jul 15 11:21:08.718240 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 11:21:08.718768 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:21:08.731579 dbus-daemon[1293]: avc: received setenforce notice (enforcing=1)
Jul 15 11:21:08.732460 sudo[1444]: pam_unix(sudo:session): session closed for user root
Jul 15 11:21:08.734351 sshd[1438]: pam_unix(sshd:session): session closed for user core
Jul 15 11:21:08.737585 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:52108.service.
Jul 15 11:21:08.738538 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:52104.service: Deactivated successfully.
Jul 15 11:21:08.738544 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit.
Jul 15 11:21:08.739194 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 11:21:08.740109 systemd-logind[1305]: Removed session 5.
Jul 15 11:21:08.775206 sshd[1446]: Accepted publickey for core from 10.0.0.1 port 52108 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.776441 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.779657 systemd-logind[1305]: New session 6 of user core.
Jul 15 11:21:08.780368 systemd[1]: Started session-6.scope.
Jul 15 11:21:08.830740 sudo[1453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 11:21:08.831065 sudo[1453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:21:08.833631 sudo[1453]: pam_unix(sudo:session): session closed for user root
Jul 15 11:21:08.837720 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 15 11:21:08.837968 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:21:08.846242 systemd[1]: Stopping audit-rules.service...
Jul 15 11:21:08.846000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 15 11:21:08.847867 auditctl[1456]: No rules
Jul 15 11:21:08.848098 kernel: kauditd_printk_skb: 68 callbacks suppressed
Jul 15 11:21:08.848141 kernel: audit: type=1305 audit(1752578468.846:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 15 11:21:08.846000 audit[1456]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc7015ee0 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:21:08.848375 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 11:21:08.848588 systemd[1]: Stopped audit-rules.service.
Jul 15 11:21:08.850076 systemd[1]: Starting audit-rules.service...
Jul 15 11:21:08.851741 kernel: audit: type=1300 audit(1752578468.846:156): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc7015ee0 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:21:08.851785 kernel: audit: type=1327 audit(1752578468.846:156): proctitle=2F7362696E2F617564697463746C002D44
Jul 15 11:21:08.846000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Jul 15 11:21:08.852385 kernel: audit: type=1131 audit(1752578468.847:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.865549 augenrules[1474]: No rules
Jul 15 11:21:08.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.866167 systemd[1]: Finished audit-rules.service.
Jul 15 11:21:08.867000 audit[1452]: USER_END pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.867951 sudo[1452]: pam_unix(sudo:session): session closed for user root
Jul 15 11:21:08.870499 kernel: audit: type=1130 audit(1752578468.865:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.870567 kernel: audit: type=1106 audit(1752578468.867:159): pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.870585 kernel: audit: type=1104 audit(1752578468.867:160): pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.867000 audit[1452]: CRED_DISP pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.872748 sshd[1446]: pam_unix(sshd:session): session closed for user core
Jul 15 11:21:08.872000 audit[1446]: USER_END pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.874936 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:52124.service.
Jul 15 11:21:08.875357 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:52108.service: Deactivated successfully.
Jul 15 11:21:08.872000 audit[1446]: CRED_DISP pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.876336 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit.
Jul 15 11:21:08.876393 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 11:21:08.877400 systemd-logind[1305]: Removed session 6.
Jul 15 11:21:08.878294 kernel: audit: type=1106 audit(1752578468.872:161): pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.878340 kernel: audit: type=1104 audit(1752578468.872:162): pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.878367 kernel: audit: type=1130 audit(1752578468.874:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.116:22-10.0.0.1:52108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.911000 audit[1479]: USER_ACCT pid=1479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.913887 sshd[1479]: Accepted publickey for core from 10.0.0.1 port 52124 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:21:08.915443 sshd[1479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:21:08.914000 audit[1479]: CRED_ACQ pid=1479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.914000 audit[1479]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe70c8070 a2=3 a3=1 items=0 ppid=1 pid=1479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:21:08.914000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:21:08.919017 systemd-logind[1305]: New session 7 of user core.
Jul 15 11:21:08.919672 systemd[1]: Started session-7.scope.
Jul 15 11:21:08.922000 audit[1479]: USER_START pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.924000 audit[1484]: CRED_ACQ pid=1484 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:21:08.970000 audit[1485]: USER_ACCT pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.970000 audit[1485]: CRED_REFR pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:08.971341 sudo[1485]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 11:21:08.971563 sudo[1485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:21:08.972000 audit[1485]: USER_START pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 15 11:21:09.029003 systemd[1]: Starting docker.service...
Jul 15 11:21:09.111598 env[1496]: time="2025-07-15T11:21:09.111535404Z" level=info msg="Starting up"
Jul 15 11:21:09.113256 env[1496]: time="2025-07-15T11:21:09.113224110Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:21:09.113256 env[1496]: time="2025-07-15T11:21:09.113250586Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:21:09.113342 env[1496]: time="2025-07-15T11:21:09.113273291Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:21:09.113342 env[1496]: time="2025-07-15T11:21:09.113285154Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:21:09.119178 env[1496]: time="2025-07-15T11:21:09.119148621Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:21:09.119178 env[1496]: time="2025-07-15T11:21:09.119171955Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:21:09.119298 env[1496]: time="2025-07-15T11:21:09.119186332Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:21:09.119298 env[1496]: time="2025-07-15T11:21:09.119195249Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:21:09.124187 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2813303259-merged.mount: Deactivated successfully.
Jul 15 11:21:09.311687 env[1496]: time="2025-07-15T11:21:09.311598273Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 15 11:21:09.311889 env[1496]: time="2025-07-15T11:21:09.311872110Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 15 11:21:09.312278 env[1496]: time="2025-07-15T11:21:09.312259865Z" level=info msg="Loading containers: start."
Jul 15 11:21:09.369000 audit[1533]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.369000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffc31b9f0 a2=0 a3=1 items=0 ppid=1496 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 15 11:21:09.371000 audit[1535]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.371000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd99f4a70 a2=0 a3=1 items=0 ppid=1496 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.371000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 15 11:21:09.373000 audit[1537]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.373000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff3080ce0 a2=0 a3=1 items=0 ppid=1496 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.373000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 15 11:21:09.375000 
audit[1539]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.375000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffefd63080 a2=0 a3=1 items=0 ppid=1496 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 15 11:21:09.378000 audit[1541]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.378000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffebcdcb80 a2=0 a3=1 items=0 ppid=1496 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.378000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 15 11:21:09.407000 audit[1546]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.407000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd33bd670 a2=0 a3=1 items=0 ppid=1496 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.407000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 15 11:21:09.414000 audit[1548]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.414000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc66e1180 a2=0 a3=1 items=0 ppid=1496 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.414000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 15 11:21:09.416000 audit[1550]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.416000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcbd46220 a2=0 a3=1 items=0 ppid=1496 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.416000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 15 11:21:09.417000 audit[1552]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.417000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc4433790 a2=0 a3=1 items=0 ppid=1496 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.417000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:21:09.424000 audit[1556]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.424000 audit[1556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc9e41660 a2=0 a3=1 items=0 ppid=1496 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:21:09.433000 audit[1557]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.433000 audit[1557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd771c0e0 a2=0 a3=1 items=0 ppid=1496 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.433000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:21:09.445130 kernel: Initializing XFRM netlink socket Jul 15 11:21:09.467349 env[1496]: time="2025-07-15T11:21:09.467306990Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 15 11:21:09.481000 audit[1565]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.481000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc543d010 a2=0 a3=1 items=0 ppid=1496 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.481000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 15 11:21:09.495000 audit[1568]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.495000 audit[1568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc236f7d0 a2=0 a3=1 items=0 ppid=1496 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.495000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 15 11:21:09.497000 audit[1571]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.497000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc7f7de40 a2=0 a3=1 items=0 ppid=1496 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 
15 11:21:09.497000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 15 11:21:09.499000 audit[1573]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.499000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffb15a720 a2=0 a3=1 items=0 ppid=1496 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.499000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 15 11:21:09.502000 audit[1575]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.502000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe0f09340 a2=0 a3=1 items=0 ppid=1496 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.502000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 15 11:21:09.504000 audit[1577]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.504000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffd9e7e120 a2=0 a3=1 items=0 ppid=1496 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.504000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 15 11:21:09.506000 audit[1579]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.506000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc1805ab0 a2=0 a3=1 items=0 ppid=1496 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.506000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 15 11:21:09.514000 audit[1582]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.514000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffcb737880 a2=0 a3=1 items=0 ppid=1496 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.514000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 15 11:21:09.515000 audit[1584]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.515000 
audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc5988d90 a2=0 a3=1 items=0 ppid=1496 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.515000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 15 11:21:09.517000 audit[1586]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.517000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc06e1a90 a2=0 a3=1 items=0 ppid=1496 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.517000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 15 11:21:09.519000 audit[1588]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.519000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc9388ea0 a2=0 a3=1 items=0 ppid=1496 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.519000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 15 11:21:09.520546 systemd-networkd[1098]: docker0: Link UP Jul 15 11:21:09.526000 audit[1592]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.526000 audit[1592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd76c5460 a2=0 a3=1 items=0 ppid=1496 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.526000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:21:09.536000 audit[1593]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:09.536000 audit[1593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffca92bcf0 a2=0 a3=1 items=0 ppid=1496 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:09.536000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:21:09.538031 env[1496]: time="2025-07-15T11:21:09.537997188Z" level=info msg="Loading containers: done." Jul 15 11:21:09.554216 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1516457790-merged.mount: Deactivated successfully. 
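The `PROCTITLE` values in the audit records above are the invoked command lines, hex-encoded with NUL bytes separating the arguments (this is what `ausearch -i` renders in readable form). A small sketch decoder, applied to the first record in the sequence (pid 1533):

```python
def decode_proctitle(hex_value: str) -> str:
    # PROCTITLE is hex-encoded argv; arguments are separated by NUL bytes.
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00"))

print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))  # → /usr/sbin/iptables --wait -t nat -N DOCKER
```

Decoding the rest of the sequence shows dockerd creating the usual `DOCKER`, `DOCKER-USER`, and `DOCKER-ISOLATION-STAGE-1/2` chains in the `nat` and `filter` tables.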
Jul 15 11:21:09.558000 env[1496]: time="2025-07-15T11:21:09.557961179Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 11:21:09.558135 env[1496]: time="2025-07-15T11:21:09.558117208Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 15 11:21:09.558233 env[1496]: time="2025-07-15T11:21:09.558217221Z" level=info msg="Daemon has completed initialization" Jul 15 11:21:09.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:09.571098 systemd[1]: Started docker.service. Jul 15 11:21:09.575139 env[1496]: time="2025-07-15T11:21:09.575094334Z" level=info msg="API listen on /run/docker.sock" Jul 15 11:21:10.163794 env[1322]: time="2025-07-15T11:21:10.163755707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 11:21:10.828295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923775421.mount: Deactivated successfully. 
Jul 15 11:21:11.923883 env[1322]: time="2025-07-15T11:21:11.923826978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:11.925272 env[1322]: time="2025-07-15T11:21:11.925219659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:11.926942 env[1322]: time="2025-07-15T11:21:11.926908495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:11.929076 env[1322]: time="2025-07-15T11:21:11.929039057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:11.929935 env[1322]: time="2025-07-15T11:21:11.929903181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 15 11:21:11.933516 env[1322]: time="2025-07-15T11:21:11.933472581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 11:21:13.251709 env[1322]: time="2025-07-15T11:21:13.251657883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:13.255697 env[1322]: time="2025-07-15T11:21:13.254069290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:21:13.256856 env[1322]: time="2025-07-15T11:21:13.256768161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:13.258032 env[1322]: time="2025-07-15T11:21:13.257999828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:13.261203 env[1322]: time="2025-07-15T11:21:13.261129441Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 15 11:21:13.261759 env[1322]: time="2025-07-15T11:21:13.261733856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 11:21:14.432328 env[1322]: time="2025-07-15T11:21:14.432283868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:14.433621 env[1322]: time="2025-07-15T11:21:14.433592829Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:14.436117 env[1322]: time="2025-07-15T11:21:14.436078183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:14.438125 env[1322]: time="2025-07-15T11:21:14.438096793Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:14.438922 env[1322]: time="2025-07-15T11:21:14.438880235Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 15 11:21:14.439420 env[1322]: time="2025-07-15T11:21:14.439395212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 11:21:15.361597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 11:21:15.363077 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 15 11:21:15.363129 kernel: audit: type=1130 audit(1752578475.360:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:15.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:15.361773 systemd[1]: Stopped kubelet.service. Jul 15 11:21:15.363349 systemd[1]: Starting kubelet.service... Jul 15 11:21:15.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:15.366038 kernel: audit: type=1131 audit(1752578475.360:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:15.411051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066192465.mount: Deactivated successfully. 
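The `audit(1752578475.360:198)` stamps in the kernel lines above pair an epoch-seconds timestamp with a per-boot serial number. Converting the epoch part confirms it lines up with the journal's wall-clock time (11:21:15 UTC), a quick sanity check when correlating audit records with journal entries:

```python
from datetime import datetime, timezone

# Epoch taken from "audit(1752578475.360:198)" in the log above.
epoch = 1752578475.360
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
```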
Jul 15 11:21:15.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:15.458460 systemd[1]: Started kubelet.service. Jul 15 11:21:15.460920 kernel: audit: type=1130 audit(1752578475.457:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:15.497960 kubelet[1638]: E0715 11:21:15.497913 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:21:15.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 15 11:21:15.500413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:21:15.500550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:21:15.503862 kernel: audit: type=1131 audit(1752578475.500:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 15 11:21:16.055722 env[1322]: time="2025-07-15T11:21:16.055677178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:16.058435 env[1322]: time="2025-07-15T11:21:16.058409032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:16.060423 env[1322]: time="2025-07-15T11:21:16.060399387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:16.062251 env[1322]: time="2025-07-15T11:21:16.062225628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:16.063264 env[1322]: time="2025-07-15T11:21:16.062826444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 15 11:21:16.065260 env[1322]: time="2025-07-15T11:21:16.065236462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 11:21:16.671228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577069322.mount: Deactivated successfully. 
Jul 15 11:21:17.637091 env[1322]: time="2025-07-15T11:21:17.637033860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:17.638426 env[1322]: time="2025-07-15T11:21:17.638394936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:17.640156 env[1322]: time="2025-07-15T11:21:17.640121703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:17.645271 env[1322]: time="2025-07-15T11:21:17.645222459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:17.645577 env[1322]: time="2025-07-15T11:21:17.645551896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 11:21:17.646101 env[1322]: time="2025-07-15T11:21:17.646027823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 11:21:18.106054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479316656.mount: Deactivated successfully. 
Jul 15 11:21:18.109290 env[1322]: time="2025-07-15T11:21:18.109247267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:18.111039 env[1322]: time="2025-07-15T11:21:18.110998169Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:18.112207 env[1322]: time="2025-07-15T11:21:18.112178925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:18.114126 env[1322]: time="2025-07-15T11:21:18.114098791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:18.114609 env[1322]: time="2025-07-15T11:21:18.114580538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 11:21:18.115015 env[1322]: time="2025-07-15T11:21:18.114980966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 11:21:18.581884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445910857.mount: Deactivated successfully. 
Jul 15 11:21:20.612470 env[1322]: time="2025-07-15T11:21:20.612409474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:20.614226 env[1322]: time="2025-07-15T11:21:20.614196423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:20.616011 env[1322]: time="2025-07-15T11:21:20.615984527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:20.618711 env[1322]: time="2025-07-15T11:21:20.618676960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:20.619571 env[1322]: time="2025-07-15T11:21:20.619547071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 15 11:21:25.485948 systemd[1]: Stopped kubelet.service. Jul 15 11:21:25.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.487944 systemd[1]: Starting kubelet.service... 
Jul 15 11:21:25.489548 kernel: audit: type=1130 audit(1752578485.484:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.489626 kernel: audit: type=1131 audit(1752578485.484:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.508781 systemd[1]: Reloading. Jul 15 11:21:25.561577 /usr/lib/systemd/system-generators/torcx-generator[1695]: time="2025-07-15T11:21:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:21:25.561607 /usr/lib/systemd/system-generators/torcx-generator[1695]: time="2025-07-15T11:21:25Z" level=info msg="torcx already run" Jul 15 11:21:25.625710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:21:25.625905 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:21:25.641240 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:21:25.700796 systemd[1]: Started kubelet.service. Jul 15 11:21:25.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:25.703860 kernel: audit: type=1130 audit(1752578485.700:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.704532 systemd[1]: Stopping kubelet.service... Jul 15 11:21:25.705026 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:21:25.705358 systemd[1]: Stopped kubelet.service. Jul 15 11:21:25.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.707050 systemd[1]: Starting kubelet.service... Jul 15 11:21:25.707895 kernel: audit: type=1131 audit(1752578485.704:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.797914 systemd[1]: Started kubelet.service. Jul 15 11:21:25.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.801869 kernel: audit: type=1130 audit(1752578485.797:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:25.836421 kubelet[1753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:21:25.836421 kubelet[1753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 15 11:21:25.836421 kubelet[1753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:21:25.836739 kubelet[1753]: I0715 11:21:25.836472 1753 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:21:26.709033 kubelet[1753]: I0715 11:21:26.708984 1753 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:21:26.709033 kubelet[1753]: I0715 11:21:26.709020 1753 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:21:26.709278 kubelet[1753]: I0715 11:21:26.709245 1753 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:21:26.791869 kubelet[1753]: E0715 11:21:26.791823 1753 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:26.793697 kubelet[1753]: I0715 11:21:26.793671 1753 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:21:26.801643 kubelet[1753]: E0715 11:21:26.801601 1753 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:21:26.801643 kubelet[1753]: I0715 11:21:26.801637 1753 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been 
enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:21:26.805127 kubelet[1753]: I0715 11:21:26.805105 1753 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 11:21:26.805458 kubelet[1753]: I0715 11:21:26.805446 1753 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:21:26.805571 kubelet[1753]: I0715 11:21:26.805549 1753 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:21:26.805733 kubelet[1753]: I0715 11:21:26.805574 1753 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"E
xperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 15 11:21:26.805819 kubelet[1753]: I0715 11:21:26.805806 1753 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:21:26.805819 kubelet[1753]: I0715 11:21:26.805818 1753 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:21:26.806080 kubelet[1753]: I0715 11:21:26.806056 1753 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:21:26.811895 kubelet[1753]: I0715 11:21:26.811872 1753 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:21:26.811895 kubelet[1753]: I0715 11:21:26.811896 1753 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:21:26.811978 kubelet[1753]: I0715 11:21:26.811921 1753 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:21:26.812006 kubelet[1753]: I0715 11:21:26.811998 1753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:21:26.828991 kubelet[1753]: W0715 11:21:26.828880 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:26.829112 kubelet[1753]: E0715 11:21:26.829094 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:26.836017 kubelet[1753]: W0715 11:21:26.835941 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:26.836017 kubelet[1753]: E0715 11:21:26.835989 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:26.836134 kubelet[1753]: I0715 11:21:26.836117 1753 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:21:26.836996 kubelet[1753]: I0715 11:21:26.836978 1753 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:21:26.837534 kubelet[1753]: W0715 11:21:26.837520 1753 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 11:21:26.838527 kubelet[1753]: I0715 11:21:26.838508 1753 server.go:1274] "Started kubelet" Jul 15 11:21:26.839269 kubelet[1753]: I0715 11:21:26.839223 1753 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:21:26.839596 kubelet[1753]: I0715 11:21:26.839546 1753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:21:26.840030 kubelet[1753]: I0715 11:21:26.840011 1753 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:21:26.843287 kubelet[1753]: I0715 11:21:26.840391 1753 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:21:26.843000 audit[1753]: AVC avc: denied { mac_admin } for pid=1753 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:26.846563 kubelet[1753]: I0715 11:21:26.845000 1753 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 15 11:21:26.846563 kubelet[1753]: I0715 11:21:26.845037 1753 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 15 11:21:26.846563 kubelet[1753]: I0715 11:21:26.845090 1753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:21:26.843000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:26.846991 kubelet[1753]: I0715 11:21:26.846970 1753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:21:26.847507 kubelet[1753]: E0715 11:21:26.846302 
1753 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185268d924ffcdf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:21:26.838480374 +0000 UTC m=+1.036855271,LastTimestamp:2025-07-15 11:21:26.838480374 +0000 UTC m=+1.036855271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:21:26.847774 kernel: audit: type=1400 audit(1752578486.843:207): avc: denied { mac_admin } for pid=1753 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:26.847825 kernel: audit: type=1401 audit(1752578486.843:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:26.847854 kernel: audit: type=1300 audit(1752578486.843:207): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ca6420 a1=4000a05fc8 a2=4000ca63f0 a3=25 items=0 ppid=1 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.843000 audit[1753]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ca6420 a1=4000a05fc8 a2=4000ca63f0 a3=25 items=0 ppid=1 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.848613 kubelet[1753]: I0715 11:21:26.848591 1753 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:21:26.848835 kubelet[1753]: I0715 11:21:26.848818 1753 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:21:26.848975 kubelet[1753]: I0715 11:21:26.848963 1753 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:21:26.849256 kubelet[1753]: I0715 11:21:26.849220 1753 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:21:26.849313 kubelet[1753]: I0715 11:21:26.849302 1753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:21:26.849487 kubelet[1753]: W0715 11:21:26.849453 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:26.849613 kubelet[1753]: E0715 11:21:26.849595 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:26.851733 kernel: audit: type=1327 audit(1752578486.843:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:26.843000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:26.851937 kubelet[1753]: E0715 11:21:26.851914 1753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:21:26.852102 kubelet[1753]: E0715 11:21:26.852079 1753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Jul 15 11:21:26.852244 kubelet[1753]: E0715 11:21:26.852227 1753 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:21:26.843000 audit[1753]: AVC avc: denied { mac_admin } for pid=1753 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:26.853266 kubelet[1753]: I0715 11:21:26.853249 1753 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:21:26.854588 kernel: audit: type=1400 audit(1752578486.843:208): avc: denied { mac_admin } for pid=1753 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:26.843000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:26.843000 audit[1753]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a19bc0 a1=4000a05fe0 a2=4000ca64b0 a3=25 items=0 ppid=1 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:26.847000 audit[1766]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.847000 audit[1766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe6247c30 a2=0 a3=1 items=0 ppid=1753 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 15 11:21:26.848000 audit[1767]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.848000 audit[1767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd883e8e0 a2=0 a3=1 items=0 ppid=1753 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 15 11:21:26.849000 audit[1769]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.849000 audit[1769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd6df7e70 a2=0 a3=1 items=0 ppid=1753 
pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:21:26.856000 audit[1772]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.856000 audit[1772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0f035e0 a2=0 a3=1 items=0 ppid=1753 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:21:26.864000 audit[1777]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.864000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff9997950 a2=0 a3=1 items=0 ppid=1753 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 15 11:21:26.865998 kubelet[1753]: I0715 11:21:26.865960 1753 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 15 11:21:26.866000 audit[1780]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.866000 audit[1780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5448a70 a2=0 a3=1 items=0 ppid=1753 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 15 11:21:26.866000 audit[1779]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:26.866000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd547a0c0 a2=0 a3=1 items=0 ppid=1753 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 15 11:21:26.867397 kubelet[1753]: I0715 11:21:26.867372 1753 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:21:26.867442 kubelet[1753]: I0715 11:21:26.867406 1753 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:21:26.867442 kubelet[1753]: I0715 11:21:26.867423 1753 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:21:26.867487 kubelet[1753]: E0715 11:21:26.867465 1753 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:21:26.868097 kubelet[1753]: W0715 11:21:26.868062 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:26.868151 kubelet[1753]: E0715 11:21:26.868100 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:26.867000 audit[1782]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:26.867000 audit[1782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd277eaf0 a2=0 a3=1 items=0 ppid=1753 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 15 11:21:26.868000 audit[1781]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1781 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.868000 audit[1781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2434870 a2=0 a3=1 items=0 ppid=1753 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 15 11:21:26.868000 audit[1783]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:26.868000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffde8dfe40 a2=0 a3=1 items=0 ppid=1753 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 15 11:21:26.869000 audit[1785]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:26.869000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf0dfb10 a2=0 a3=1 items=0 ppid=1753 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 15 11:21:26.870000 audit[1787]: NETFILTER_CFG table=filter:37 family=10 entries=2 
op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:26.870000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe76809b0 a2=0 a3=1 items=0 ppid=1753 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.871651 kubelet[1753]: I0715 11:21:26.871622 1753 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:21:26.871651 kubelet[1753]: I0715 11:21:26.871650 1753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:21:26.871719 kubelet[1753]: I0715 11:21:26.871667 1753 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:21:26.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 15 11:21:26.952255 kubelet[1753]: E0715 11:21:26.952219 1753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:21:26.968543 kubelet[1753]: E0715 11:21:26.968456 1753 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:21:26.982712 kubelet[1753]: I0715 11:21:26.982679 1753 policy_none.go:49] "None policy: Start" Jul 15 11:21:26.983551 kubelet[1753]: I0715 11:21:26.983532 1753 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:21:26.983597 kubelet[1753]: I0715 11:21:26.983561 1753 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:21:26.988286 kubelet[1753]: I0715 11:21:26.988261 1753 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:21:26.986000 audit[1753]: AVC avc: denied { mac_admin } for pid=1753 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:26.986000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:26.986000 audit[1753]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d50d80 a1=400086dad0 a2=4000d50d50 a3=25 items=0 ppid=1 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:26.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:26.988493 kubelet[1753]: I0715 11:21:26.988331 1753 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 15 11:21:26.988493 kubelet[1753]: I0715 11:21:26.988427 1753 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:21:26.988493 kubelet[1753]: I0715 11:21:26.988437 1753 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:21:26.989639 kubelet[1753]: I0715 11:21:26.989610 1753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:21:26.990541 kubelet[1753]: E0715 11:21:26.990441 1753 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:21:27.053206 kubelet[1753]: E0715 11:21:27.053154 1753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.116:6443: connect: connection refused" interval="400ms" Jul 15 11:21:27.090102 kubelet[1753]: I0715 11:21:27.090070 1753 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:21:27.092371 kubelet[1753]: E0715 11:21:27.092342 1753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jul 15 11:21:27.251344 kubelet[1753]: I0715 11:21:27.251242 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:21:27.251344 kubelet[1753]: I0715 11:21:27.251284 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ba523266aa451dbbff595c1e7bcd6f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8ba523266aa451dbbff595c1e7bcd6f5\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:27.251344 kubelet[1753]: I0715 11:21:27.251316 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:27.251344 kubelet[1753]: I0715 11:21:27.251336 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") 
" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:27.252046 kubelet[1753]: I0715 11:21:27.251353 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:27.252046 kubelet[1753]: I0715 11:21:27.251368 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ba523266aa451dbbff595c1e7bcd6f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ba523266aa451dbbff595c1e7bcd6f5\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:27.252046 kubelet[1753]: I0715 11:21:27.251396 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:27.252046 kubelet[1753]: I0715 11:21:27.251410 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:27.252046 kubelet[1753]: I0715 11:21:27.251425 1753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ba523266aa451dbbff595c1e7bcd6f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"8ba523266aa451dbbff595c1e7bcd6f5\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:27.294518 kubelet[1753]: I0715 11:21:27.294470 1753 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:21:27.294902 kubelet[1753]: E0715 11:21:27.294868 1753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jul 15 11:21:27.414406 kubelet[1753]: E0715 11:21:27.414293 1753 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185268d924ffcdf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:21:26.838480374 +0000 UTC m=+1.036855271,LastTimestamp:2025-07-15 11:21:26.838480374 +0000 UTC m=+1.036855271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:21:27.454090 kubelet[1753]: E0715 11:21:27.454059 1753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Jul 15 11:21:27.474511 kubelet[1753]: E0715 11:21:27.474468 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:27.475524 kubelet[1753]: E0715 11:21:27.475496 1753 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:27.480550 kubelet[1753]: E0715 11:21:27.476565 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:27.485074 env[1322]: time="2025-07-15T11:21:27.483433886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8ba523266aa451dbbff595c1e7bcd6f5,Namespace:kube-system,Attempt:0,}" Jul 15 11:21:27.485574 env[1322]: time="2025-07-15T11:21:27.485533635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 11:21:27.485712 env[1322]: time="2025-07-15T11:21:27.485672809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 11:21:27.644237 kubelet[1753]: W0715 11:21:27.644076 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:27.644237 kubelet[1753]: E0715 11:21:27.644143 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:27.696681 kubelet[1753]: I0715 11:21:27.696650 1753 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:21:27.697033 kubelet[1753]: E0715 11:21:27.696998 
1753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jul 15 11:21:27.774896 kubelet[1753]: W0715 11:21:27.774749 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:27.774896 kubelet[1753]: E0715 11:21:27.774893 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:28.073785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474820897.mount: Deactivated successfully. 
Jul 15 11:21:28.079311 env[1322]: time="2025-07-15T11:21:28.079273738Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.080197 env[1322]: time="2025-07-15T11:21:28.080172301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.081044 env[1322]: time="2025-07-15T11:21:28.081001682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.083123 env[1322]: time="2025-07-15T11:21:28.083093629Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.084505 env[1322]: time="2025-07-15T11:21:28.084470512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.085203 env[1322]: time="2025-07-15T11:21:28.085179744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.087780 env[1322]: time="2025-07-15T11:21:28.087743939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.089786 env[1322]: time="2025-07-15T11:21:28.089759195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.092266 env[1322]: time="2025-07-15T11:21:28.092242745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.093058 env[1322]: time="2025-07-15T11:21:28.093033901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.093922 env[1322]: time="2025-07-15T11:21:28.093898192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.094543 env[1322]: time="2025-07-15T11:21:28.094511481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:28.104522 kubelet[1753]: W0715 11:21:28.104473 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:28.104778 kubelet[1753]: E0715 11:21:28.104527 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:28.123210 kubelet[1753]: W0715 11:21:28.123153 1753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 15 11:21:28.123309 kubelet[1753]: E0715 11:21:28.123223 1753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:21:28.124419 env[1322]: time="2025-07-15T11:21:28.124354226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:28.124419 env[1322]: time="2025-07-15T11:21:28.124392172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:28.124575 env[1322]: time="2025-07-15T11:21:28.124409787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:28.124879 env[1322]: time="2025-07-15T11:21:28.124830629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/088cb4c7c72a8d5e7bd7045c244dfa71fe34a0a5a8921a99e2c7199a7d359398 pid=1811 runtime=io.containerd.runc.v2 Jul 15 11:21:28.125532 env[1322]: time="2025-07-15T11:21:28.125464608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:28.125532 env[1322]: time="2025-07-15T11:21:28.125500637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:28.125532 env[1322]: time="2025-07-15T11:21:28.125510343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:28.125879 env[1322]: time="2025-07-15T11:21:28.125807880Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97a0b382be3c612368814eb968cdc29e388ce29c3a9b532f7a2012111ada8ef8 pid=1817 runtime=io.containerd.runc.v2 Jul 15 11:21:28.126696 env[1322]: time="2025-07-15T11:21:28.126642294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:28.126696 env[1322]: time="2025-07-15T11:21:28.126672611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:28.126696 env[1322]: time="2025-07-15T11:21:28.126682597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:28.127331 env[1322]: time="2025-07-15T11:21:28.127279668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/582889d72d4dee3940dfb2660ee4c084f6f27a30a6fe61b7e0e8ff4d7cd2ce1a pid=1812 runtime=io.containerd.runc.v2 Jul 15 11:21:28.208461 env[1322]: time="2025-07-15T11:21:28.208415513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a0b382be3c612368814eb968cdc29e388ce29c3a9b532f7a2012111ada8ef8\"" Jul 15 11:21:28.208721 env[1322]: time="2025-07-15T11:21:28.208692559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"088cb4c7c72a8d5e7bd7045c244dfa71fe34a0a5a8921a99e2c7199a7d359398\"" Jul 15 11:21:28.209301 kubelet[1753]: E0715 11:21:28.209277 1753 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:28.210784 env[1322]: time="2025-07-15T11:21:28.210756585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8ba523266aa451dbbff595c1e7bcd6f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"582889d72d4dee3940dfb2660ee4c084f6f27a30a6fe61b7e0e8ff4d7cd2ce1a\"" Jul 15 11:21:28.211475 kubelet[1753]: E0715 11:21:28.211452 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:28.211885 kubelet[1753]: E0715 11:21:28.211866 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:28.213044 env[1322]: time="2025-07-15T11:21:28.213009543Z" level=info msg="CreateContainer within sandbox \"97a0b382be3c612368814eb968cdc29e388ce29c3a9b532f7a2012111ada8ef8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:21:28.213790 env[1322]: time="2025-07-15T11:21:28.213664412Z" level=info msg="CreateContainer within sandbox \"088cb4c7c72a8d5e7bd7045c244dfa71fe34a0a5a8921a99e2c7199a7d359398\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:21:28.214034 env[1322]: time="2025-07-15T11:21:28.213918851Z" level=info msg="CreateContainer within sandbox \"582889d72d4dee3940dfb2660ee4c084f6f27a30a6fe61b7e0e8ff4d7cd2ce1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:21:28.228211 env[1322]: time="2025-07-15T11:21:28.228165962Z" level=info msg="CreateContainer within sandbox \"088cb4c7c72a8d5e7bd7045c244dfa71fe34a0a5a8921a99e2c7199a7d359398\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"b128482bd95fbe8808bf0c830b3b48d002fb98334683046f6d69ef5c033b9ddb\"" Jul 15 11:21:28.228802 env[1322]: time="2025-07-15T11:21:28.228769544Z" level=info msg="StartContainer for \"b128482bd95fbe8808bf0c830b3b48d002fb98334683046f6d69ef5c033b9ddb\"" Jul 15 11:21:28.230160 env[1322]: time="2025-07-15T11:21:28.230123540Z" level=info msg="CreateContainer within sandbox \"97a0b382be3c612368814eb968cdc29e388ce29c3a9b532f7a2012111ada8ef8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4166a059856bb3fad279c850e813a2710e4dce7e32ae4e6e04da151f5432f11d\"" Jul 15 11:21:28.230564 env[1322]: time="2025-07-15T11:21:28.230488381Z" level=info msg="StartContainer for \"4166a059856bb3fad279c850e813a2710e4dce7e32ae4e6e04da151f5432f11d\"" Jul 15 11:21:28.232205 env[1322]: time="2025-07-15T11:21:28.232163560Z" level=info msg="CreateContainer within sandbox \"582889d72d4dee3940dfb2660ee4c084f6f27a30a6fe61b7e0e8ff4d7cd2ce1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71043f02f04b81d8103a8e84a50282a48c7a2b0928e43096df5f0876cc17814f\"" Jul 15 11:21:28.232637 env[1322]: time="2025-07-15T11:21:28.232606890Z" level=info msg="StartContainer for \"71043f02f04b81d8103a8e84a50282a48c7a2b0928e43096df5f0876cc17814f\"" Jul 15 11:21:28.255930 kubelet[1753]: E0715 11:21:28.255877 1753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" Jul 15 11:21:28.312815 env[1322]: time="2025-07-15T11:21:28.312767600Z" level=info msg="StartContainer for \"71043f02f04b81d8103a8e84a50282a48c7a2b0928e43096df5f0876cc17814f\" returns successfully" Jul 15 11:21:28.315333 env[1322]: time="2025-07-15T11:21:28.315301399Z" level=info msg="StartContainer for \"b128482bd95fbe8808bf0c830b3b48d002fb98334683046f6d69ef5c033b9ddb\" returns successfully" Jul 
15 11:21:28.350051 env[1322]: time="2025-07-15T11:21:28.349970964Z" level=info msg="StartContainer for \"4166a059856bb3fad279c850e813a2710e4dce7e32ae4e6e04da151f5432f11d\" returns successfully" Jul 15 11:21:28.498138 kubelet[1753]: I0715 11:21:28.498103 1753 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:21:28.873191 kubelet[1753]: E0715 11:21:28.873168 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:28.875778 kubelet[1753]: E0715 11:21:28.875757 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:28.877178 kubelet[1753]: E0715 11:21:28.877158 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:29.879295 kubelet[1753]: E0715 11:21:29.879271 1753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:30.105995 kubelet[1753]: E0715 11:21:30.105965 1753 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 11:21:30.177042 kubelet[1753]: I0715 11:21:30.176937 1753 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:21:30.177042 kubelet[1753]: E0715 11:21:30.176972 1753 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 11:21:30.824898 kubelet[1753]: I0715 11:21:30.824862 1753 apiserver.go:52] "Watching apiserver" Jul 15 11:21:30.849963 kubelet[1753]: I0715 11:21:30.849936 1753 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:21:32.247708 systemd[1]: Reloading. Jul 15 11:21:32.292340 /usr/lib/systemd/system-generators/torcx-generator[2052]: time="2025-07-15T11:21:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:21:32.292369 /usr/lib/systemd/system-generators/torcx-generator[2052]: time="2025-07-15T11:21:32Z" level=info msg="torcx already run" Jul 15 11:21:32.353048 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:21:32.353065 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:21:32.368183 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:21:32.435156 kubelet[1753]: I0715 11:21:32.435125 1753 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:21:32.435429 systemd[1]: Stopping kubelet.service... Jul 15 11:21:32.458205 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:21:32.458477 systemd[1]: Stopped kubelet.service. Jul 15 11:21:32.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:32.459012 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 15 11:21:32.459056 kernel: audit: type=1131 audit(1752578492.457:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:32.460651 systemd[1]: Starting kubelet.service... Jul 15 11:21:32.551366 systemd[1]: Started kubelet.service. Jul 15 11:21:32.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:32.554046 kernel: audit: type=1130 audit(1752578492.550:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:32.589015 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:21:32.589015 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 11:21:32.589015 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 11:21:32.589384 kubelet[2105]: I0715 11:21:32.589063 2105 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:21:32.596540 kubelet[2105]: I0715 11:21:32.596503 2105 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:21:32.596540 kubelet[2105]: I0715 11:21:32.596533 2105 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:21:32.596757 kubelet[2105]: I0715 11:21:32.596732 2105 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:21:32.597999 kubelet[2105]: I0715 11:21:32.597972 2105 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 11:21:32.599719 kubelet[2105]: I0715 11:21:32.599688 2105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:21:32.603027 kubelet[2105]: E0715 11:21:32.602996 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:21:32.603078 kubelet[2105]: I0715 11:21:32.603029 2105 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:21:32.605739 kubelet[2105]: I0715 11:21:32.605701 2105 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:21:32.606270 kubelet[2105]: I0715 11:21:32.606249 2105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:21:32.606381 kubelet[2105]: I0715 11:21:32.606355 2105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:21:32.606535 kubelet[2105]: I0715 11:21:32.606377 2105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 15 11:21:32.606617 kubelet[2105]: I0715 11:21:32.606538 2105 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:21:32.606617 kubelet[2105]: I0715 11:21:32.606549 2105 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:21:32.606617 kubelet[2105]: I0715 11:21:32.606578 2105 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:21:32.606678 kubelet[2105]: I0715 11:21:32.606667 2105 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:21:32.606704 kubelet[2105]: I0715 11:21:32.606678 2105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:21:32.606704 kubelet[2105]: I0715 11:21:32.606695 2105 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:21:32.606743 kubelet[2105]: I0715 11:21:32.606712 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:21:32.607472 kubelet[2105]: I0715 11:21:32.607444 2105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:21:32.608004 kubelet[2105]: I0715 11:21:32.607980 2105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:21:32.608381 kubelet[2105]: I0715 11:21:32.608358 2105 server.go:1274] "Started kubelet" Jul 15 11:21:32.609644 kubelet[2105]: I0715 11:21:32.609538 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:21:32.609805 kubelet[2105]: I0715 11:21:32.609767 2105 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:21:32.609878 kubelet[2105]: I0715 11:21:32.609820 2105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:21:32.610538 kubelet[2105]: I0715 11:21:32.610508 2105 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not 
set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 15 11:21:32.610618 kubelet[2105]: I0715 11:21:32.610542 2105 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 15 11:21:32.610618 kubelet[2105]: I0715 11:21:32.610568 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:21:32.610681 kubelet[2105]: I0715 11:21:32.610622 2105 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:21:32.611442 kubelet[2105]: I0715 11:21:32.611402 2105 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:21:32.611524 kubelet[2105]: I0715 11:21:32.611512 2105 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:21:32.611613 kubelet[2105]: I0715 11:21:32.611585 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:21:32.611613 kubelet[2105]: I0715 11:21:32.611613 2105 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:21:32.609000 audit[2105]: AVC avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:32.613947 kubelet[2105]: E0715 11:21:32.613921 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:21:32.614322 kubelet[2105]: I0715 11:21:32.614292 2105 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:21:32.614414 kubelet[2105]: I0715 11:21:32.614391 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial 
unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:21:32.615567 kubelet[2105]: I0715 11:21:32.615547 2105 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:21:32.609000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:32.616470 kernel: audit: type=1400 audit(1752578492.609:224): avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:32.616510 kernel: audit: type=1401 audit(1752578492.609:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:32.616526 kernel: audit: type=1300 audit(1752578492.609:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b1d620 a1=4000b087f8 a2=4000b1d5f0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:32.609000 audit[2105]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b1d620 a1=4000b087f8 a2=4000b1d5f0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:32.618321 kubelet[2105]: E0715 11:21:32.618296 2105 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:21:32.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:32.626243 kernel: audit: type=1327 audit(1752578492.609:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:32.609000 audit[2105]: AVC avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:32.627990 kernel: audit: type=1400 audit(1752578492.609:225): avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:21:32.628034 kernel: audit: type=1401 audit(1752578492.609:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:32.609000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:32.609000 audit[2105]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008a8d60 a1=4000b08810 a2=4000b1d6b0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:32.631691 kernel: audit: type=1300 audit(1752578492.609:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=40008a8d60 a1=4000b08810 a2=4000b1d6b0 a3=25 items=0 ppid=1 pid=2105 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:32.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:32.636027 kernel: audit: type=1327 audit(1752578492.609:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:32.647114 kubelet[2105]: I0715 11:21:32.647076 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:21:32.648542 kubelet[2105]: I0715 11:21:32.648330 2105 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:21:32.648542 kubelet[2105]: I0715 11:21:32.648372 2105 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:21:32.648542 kubelet[2105]: I0715 11:21:32.648400 2105 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:21:32.648542 kubelet[2105]: E0715 11:21:32.648460 2105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:21:32.678907 kubelet[2105]: I0715 11:21:32.678876 2105 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:21:32.678907 kubelet[2105]: I0715 11:21:32.678898 2105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:21:32.679011 kubelet[2105]: I0715 11:21:32.678916 2105 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:21:32.679088 kubelet[2105]: I0715 11:21:32.679061 2105 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:21:32.679119 kubelet[2105]: I0715 11:21:32.679080 2105 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:21:32.679119 kubelet[2105]: I0715 11:21:32.679098 2105 policy_none.go:49] "None policy: Start" Jul 15 11:21:32.679802 kubelet[2105]: I0715 11:21:32.679766 2105 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:21:32.679802 kubelet[2105]: I0715 11:21:32.679795 2105 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:21:32.679979 kubelet[2105]: I0715 11:21:32.679954 2105 state_mem.go:75] "Updated machine memory state" Jul 15 11:21:32.681398 kubelet[2105]: I0715 11:21:32.681368 2105 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:21:32.680000 audit[2105]: AVC avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:21:32.680000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:21:32.680000 audit[2105]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e762d0 a1=4000e3acc0 a2=4000e762a0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:32.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:21:32.681669 kubelet[2105]: I0715 11:21:32.681432 2105 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 15 11:21:32.681669 kubelet[2105]: I0715 11:21:32.681558 2105 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:21:32.681669 kubelet[2105]: I0715 11:21:32.681569 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:21:32.681923 kubelet[2105]: I0715 11:21:32.681900 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:21:32.784602 kubelet[2105]: I0715 11:21:32.784552 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:21:32.790571 kubelet[2105]: I0715 11:21:32.790546 2105 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 11:21:32.790703 kubelet[2105]: I0715 11:21:32.790691 2105 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:21:32.913249 kubelet[2105]: I0715 11:21:32.913154 2105 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:32.913249 kubelet[2105]: I0715 11:21:32.913199 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:32.913249 kubelet[2105]: I0715 11:21:32.913222 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:32.913249 kubelet[2105]: I0715 11:21:32.913240 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:32.913398 kubelet[2105]: I0715 11:21:32.913257 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:21:32.913398 kubelet[2105]: I0715 
11:21:32.913272 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:21:32.913398 kubelet[2105]: I0715 11:21:32.913286 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ba523266aa451dbbff595c1e7bcd6f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ba523266aa451dbbff595c1e7bcd6f5\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:32.913398 kubelet[2105]: I0715 11:21:32.913300 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ba523266aa451dbbff595c1e7bcd6f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ba523266aa451dbbff595c1e7bcd6f5\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:32.913398 kubelet[2105]: I0715 11:21:32.913315 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ba523266aa451dbbff595c1e7bcd6f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8ba523266aa451dbbff595c1e7bcd6f5\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:33.057993 kubelet[2105]: E0715 11:21:33.057969 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:33.058150 kubelet[2105]: E0715 11:21:33.057970 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 
11:21:33.058259 kubelet[2105]: E0715 11:21:33.058215 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:33.607196 kubelet[2105]: I0715 11:21:33.607164 2105 apiserver.go:52] "Watching apiserver" Jul 15 11:21:33.612436 kubelet[2105]: I0715 11:21:33.612414 2105 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:21:33.666815 kubelet[2105]: E0715 11:21:33.666774 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:33.667444 kubelet[2105]: E0715 11:21:33.667403 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:33.672548 kubelet[2105]: E0715 11:21:33.672521 2105 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:21:33.672785 kubelet[2105]: E0715 11:21:33.672760 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:33.692310 kubelet[2105]: I0715 11:21:33.692248 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6922336 podStartE2EDuration="1.6922336s" podCreationTimestamp="2025-07-15 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:21:33.68519237 +0000 UTC m=+1.130697545" watchObservedRunningTime="2025-07-15 11:21:33.6922336 +0000 UTC m=+1.137738775" Jul 15 11:21:33.698824 
kubelet[2105]: I0715 11:21:33.698771 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6987608060000001 podStartE2EDuration="1.698760806s" podCreationTimestamp="2025-07-15 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:21:33.692650577 +0000 UTC m=+1.138155752" watchObservedRunningTime="2025-07-15 11:21:33.698760806 +0000 UTC m=+1.144265980" Jul 15 11:21:34.667454 kubelet[2105]: E0715 11:21:34.667422 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:35.669244 kubelet[2105]: E0715 11:21:35.669205 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:37.592172 kubelet[2105]: I0715 11:21:37.592146 2105 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:21:37.592887 env[1322]: time="2025-07-15T11:21:37.592837096Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 11:21:37.593148 kubelet[2105]: I0715 11:21:37.593033 2105 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:21:38.500548 kubelet[2105]: I0715 11:21:38.500489 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.500472569 podStartE2EDuration="6.500472569s" podCreationTimestamp="2025-07-15 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:21:33.699191172 +0000 UTC m=+1.144696347" watchObservedRunningTime="2025-07-15 11:21:38.500472569 +0000 UTC m=+5.945977704" Jul 15 11:21:38.550789 kubelet[2105]: I0715 11:21:38.550756 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0020c441-a9a0-4f8e-b641-29a7a00c3804-kube-proxy\") pod \"kube-proxy-7qd6m\" (UID: \"0020c441-a9a0-4f8e-b641-29a7a00c3804\") " pod="kube-system/kube-proxy-7qd6m" Jul 15 11:21:38.550990 kubelet[2105]: I0715 11:21:38.550972 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0020c441-a9a0-4f8e-b641-29a7a00c3804-xtables-lock\") pod \"kube-proxy-7qd6m\" (UID: \"0020c441-a9a0-4f8e-b641-29a7a00c3804\") " pod="kube-system/kube-proxy-7qd6m" Jul 15 11:21:38.551066 kubelet[2105]: I0715 11:21:38.551052 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0020c441-a9a0-4f8e-b641-29a7a00c3804-lib-modules\") pod \"kube-proxy-7qd6m\" (UID: \"0020c441-a9a0-4f8e-b641-29a7a00c3804\") " pod="kube-system/kube-proxy-7qd6m" Jul 15 11:21:38.551167 kubelet[2105]: I0715 11:21:38.551151 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kbr2g\" (UniqueName: \"kubernetes.io/projected/0020c441-a9a0-4f8e-b641-29a7a00c3804-kube-api-access-kbr2g\") pod \"kube-proxy-7qd6m\" (UID: \"0020c441-a9a0-4f8e-b641-29a7a00c3804\") " pod="kube-system/kube-proxy-7qd6m" Jul 15 11:21:38.652167 kubelet[2105]: I0715 11:21:38.652127 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fc7e060-33ef-49aa-be45-0cbbaf7d8636-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-qb5mq\" (UID: \"8fc7e060-33ef-49aa-be45-0cbbaf7d8636\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qb5mq" Jul 15 11:21:38.652167 kubelet[2105]: I0715 11:21:38.652166 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vn2\" (UniqueName: \"kubernetes.io/projected/8fc7e060-33ef-49aa-be45-0cbbaf7d8636-kube-api-access-45vn2\") pod \"tigera-operator-5bf8dfcb4-qb5mq\" (UID: \"8fc7e060-33ef-49aa-be45-0cbbaf7d8636\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qb5mq" Jul 15 11:21:38.659628 kubelet[2105]: I0715 11:21:38.659603 2105 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:21:38.804610 kubelet[2105]: E0715 11:21:38.804536 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:38.805238 env[1322]: time="2025-07-15T11:21:38.805201308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qd6m,Uid:0020c441-a9a0-4f8e-b641-29a7a00c3804,Namespace:kube-system,Attempt:0,}" Jul 15 11:21:38.820019 env[1322]: time="2025-07-15T11:21:38.819959889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:38.820116 env[1322]: time="2025-07-15T11:21:38.820000736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:38.820116 env[1322]: time="2025-07-15T11:21:38.820011018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:38.820224 env[1322]: time="2025-07-15T11:21:38.820129679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa0877edd5a73e4d87520c5cc1ea50261b8141b449809f06b56f9fc9bf8d13ae pid=2164 runtime=io.containerd.runc.v2 Jul 15 11:21:38.863307 env[1322]: time="2025-07-15T11:21:38.863271665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qd6m,Uid:0020c441-a9a0-4f8e-b641-29a7a00c3804,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa0877edd5a73e4d87520c5cc1ea50261b8141b449809f06b56f9fc9bf8d13ae\"" Jul 15 11:21:38.864159 kubelet[2105]: E0715 11:21:38.864125 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:38.866960 env[1322]: time="2025-07-15T11:21:38.866930585Z" level=info msg="CreateContainer within sandbox \"aa0877edd5a73e4d87520c5cc1ea50261b8141b449809f06b56f9fc9bf8d13ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:21:38.880357 env[1322]: time="2025-07-15T11:21:38.880323727Z" level=info msg="CreateContainer within sandbox \"aa0877edd5a73e4d87520c5cc1ea50261b8141b449809f06b56f9fc9bf8d13ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6948dfa711ef2a35b2fbf58d5837b79aa9d99b3c1844ed125a5dec46db3e0617\"" Jul 15 11:21:38.880969 env[1322]: time="2025-07-15T11:21:38.880914910Z" level=info msg="StartContainer for 
\"6948dfa711ef2a35b2fbf58d5837b79aa9d99b3c1844ed125a5dec46db3e0617\"" Jul 15 11:21:38.914268 env[1322]: time="2025-07-15T11:21:38.914151084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qb5mq,Uid:8fc7e060-33ef-49aa-be45-0cbbaf7d8636,Namespace:tigera-operator,Attempt:0,}" Jul 15 11:21:38.938902 env[1322]: time="2025-07-15T11:21:38.938824439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:38.938994 env[1322]: time="2025-07-15T11:21:38.938909854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:38.938994 env[1322]: time="2025-07-15T11:21:38.938944580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:38.939153 env[1322]: time="2025-07-15T11:21:38.939115010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdb309fad70915e384f721c23a30aba556fbcc6a8ac8e29f5b0ec845d976181c pid=2233 runtime=io.containerd.runc.v2 Jul 15 11:21:38.946122 env[1322]: time="2025-07-15T11:21:38.946076988Z" level=info msg="StartContainer for \"6948dfa711ef2a35b2fbf58d5837b79aa9d99b3c1844ed125a5dec46db3e0617\" returns successfully" Jul 15 11:21:38.997519 env[1322]: time="2025-07-15T11:21:38.997465256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qb5mq,Uid:8fc7e060-33ef-49aa-be45-0cbbaf7d8636,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fdb309fad70915e384f721c23a30aba556fbcc6a8ac8e29f5b0ec845d976181c\"" Jul 15 11:21:39.001614 env[1322]: time="2025-07-15T11:21:39.001575848Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 11:21:39.153000 audit[2307]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2307 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.156168 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 15 11:21:39.156241 kernel: audit: type=1325 audit(1752578499.153:227): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.156265 kernel: audit: type=1300 audit(1752578499.153:227): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9407650 a2=0 a3=1 items=0 ppid=2215 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.153000 audit[2307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9407650 a2=0 a3=1 items=0 ppid=2215 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.158701 kernel: audit: type=1327 audit(1752578499.153:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:21:39.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:21:39.153000 audit[2306]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.161453 kernel: audit: type=1325 audit(1752578499.153:228): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.161507 kernel: audit: type=1300 audit(1752578499.153:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5f645a0 a2=0 a3=1 items=0 ppid=2215 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.153000 audit[2306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5f645a0 a2=0 a3=1 items=0 ppid=2215 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:21:39.165379 kernel: audit: type=1327 audit(1752578499.153:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:21:39.165428 kernel: audit: type=1325 audit(1752578499.161:229): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.161000 audit[2308]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.161000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe503e7f0 a2=0 a3=1 items=0 ppid=2215 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.169189 kernel: audit: type=1300 audit(1752578499.161:229): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe503e7f0 a2=0 a3=1 items=0 ppid=2215 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.169218 kernel: audit: type=1327 audit(1752578499.161:229): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:21:39.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:21:39.170394 kernel: audit: type=1325 audit(1752578499.162:230): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.162000 audit[2309]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.162000 audit[2309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc3de7ac0 a2=0 a3=1 items=0 ppid=2215 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:21:39.162000 audit[2310]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.162000 audit[2310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd243c620 a2=0 a3=1 items=0 ppid=2215 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:21:39.163000 audit[2311]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.163000 audit[2311]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3919af0 a2=0 a3=1 items=0 ppid=2215 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:21:39.255000 audit[2312]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.255000 audit[2312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc0476350 a2=0 a3=1 items=0 ppid=2215 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 15 11:21:39.259000 audit[2314]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.259000 audit[2314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff7bb0e00 a2=0 a3=1 items=0 ppid=2215 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 15 11:21:39.264000 audit[2317]: 
NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.264000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcc188580 a2=0 a3=1 items=0 ppid=2215 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 15 11:21:39.265000 audit[2318]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.265000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2d83960 a2=0 a3=1 items=0 ppid=2215 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 15 11:21:39.267000 audit[2320]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.267000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffefd0cf00 a2=0 a3=1 items=0 ppid=2215 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.267000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 15 11:21:39.268000 audit[2321]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.268000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecb1b810 a2=0 a3=1 items=0 ppid=2215 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 15 11:21:39.271000 audit[2323]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.271000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffff489bb0 a2=0 a3=1 items=0 ppid=2215 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 15 11:21:39.274000 audit[2326]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.274000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 
a1=ffffe64b1590 a2=0 a3=1 items=0 ppid=2215 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 15 11:21:39.275000 audit[2327]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.275000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6f3ac10 a2=0 a3=1 items=0 ppid=2215 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 15 11:21:39.277000 audit[2329]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.277000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0ca60c0 a2=0 a3=1 items=0 ppid=2215 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 15 
11:21:39.278000 audit[2330]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.278000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda4d5340 a2=0 a3=1 items=0 ppid=2215 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 15 11:21:39.280000 audit[2332]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.280000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff93239e0 a2=0 a3=1 items=0 ppid=2215 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:21:39.283000 audit[2335]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.283000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcb1d3c30 a2=0 a3=1 items=0 ppid=2215 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 
11:21:39.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:21:39.286000 audit[2338]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.286000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7695e70 a2=0 a3=1 items=0 ppid=2215 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 15 11:21:39.287000 audit[2339]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.287000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9092dd0 a2=0 a3=1 items=0 ppid=2215 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 15 11:21:39.289000 audit[2341]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.289000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=524 a0=3 a1=fffff33f2950 a2=0 a3=1 items=0 ppid=2215 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:21:39.292000 audit[2344]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.292000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe12a5140 a2=0 a3=1 items=0 ppid=2215 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:21:39.293000 audit[2345]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.293000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc4f24c0 a2=0 a3=1 items=0 ppid=2215 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 15 11:21:39.296207 
kubelet[2105]: E0715 11:21:39.296176 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:39.297000 audit[2347]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:21:39.297000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe335ff30 a2=0 a3=1 items=0 ppid=2215 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 15 11:21:39.326000 audit[2353]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:39.326000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff782bf90 a2=0 a3=1 items=0 ppid=2215 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:39.341000 audit[2353]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:39.341000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff782bf90 a2=0 a3=1 items=0 ppid=2215 
pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:39.342000 audit[2358]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.342000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcaf59120 a2=0 a3=1 items=0 ppid=2215 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 15 11:21:39.344000 audit[2360]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.344000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe87d4f90 a2=0 a3=1 items=0 ppid=2215 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 15 11:21:39.348000 audit[2363]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2363 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.348000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffde747e20 a2=0 a3=1 items=0 ppid=2215 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 15 11:21:39.349000 audit[2364]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.349000 audit[2364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca36d080 a2=0 a3=1 items=0 ppid=2215 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 15 11:21:39.351000 audit[2366]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.351000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce5d55f0 a2=0 a3=1 items=0 ppid=2215 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.351000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 15 11:21:39.352000 audit[2367]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.352000 audit[2367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff28d44a0 a2=0 a3=1 items=0 ppid=2215 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 15 11:21:39.354000 audit[2369]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.354000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff0277000 a2=0 a3=1 items=0 ppid=2215 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 15 11:21:39.357000 audit[2372]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.357000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 
a0=3 a1=ffffd34aedc0 a2=0 a3=1 items=0 ppid=2215 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 15 11:21:39.358000 audit[2373]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.358000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe103e720 a2=0 a3=1 items=0 ppid=2215 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 15 11:21:39.361000 audit[2375]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.361000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeee30840 a2=0 a3=1 items=0 ppid=2215 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.361000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 15 11:21:39.362000 audit[2376]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.362000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe702a070 a2=0 a3=1 items=0 ppid=2215 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 15 11:21:39.364000 audit[2378]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.364000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe1b4bba0 a2=0 a3=1 items=0 ppid=2215 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:21:39.367000 audit[2381]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.367000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 
a1=ffffc4c15c30 a2=0 a3=1 items=0 ppid=2215 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 15 11:21:39.370000 audit[2384]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.370000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2c91260 a2=0 a3=1 items=0 ppid=2215 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 15 11:21:39.371000 audit[2385]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.371000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd8937190 a2=0 a3=1 items=0 ppid=2215 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.371000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 15 11:21:39.374000 audit[2387]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.374000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe63e9b10 a2=0 a3=1 items=0 ppid=2215 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:21:39.377000 audit[2390]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.377000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff5cbb320 a2=0 a3=1 items=0 ppid=2215 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:21:39.378000 audit[2391]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.378000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc546aa0 a2=0 a3=1 items=0 ppid=2215 pid=2391 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.378000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 15 11:21:39.381000 audit[2393]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.381000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff5627730 a2=0 a3=1 items=0 ppid=2215 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 15 11:21:39.383000 audit[2394]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.383000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff25ffdc0 a2=0 a3=1 items=0 ppid=2215 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 15 11:21:39.386000 audit[2396]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.386000 audit[2396]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc4821e50 a2=0 a3=1 items=0 ppid=2215 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:21:39.389000 audit[2399]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:21:39.389000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd56f6b10 a2=0 a3=1 items=0 ppid=2215 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:21:39.392000 audit[2401]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 15 11:21:39.392000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe4e6d720 a2=0 a3=1 items=0 ppid=2215 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.392000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:39.392000 audit[2401]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables-resto" Jul 15 11:21:39.392000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe4e6d720 a2=0 a3=1 items=0 ppid=2215 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:39.392000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:39.677423 kubelet[2105]: E0715 11:21:39.677379 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:39.677725 kubelet[2105]: E0715 11:21:39.677652 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:39.687334 kubelet[2105]: I0715 11:21:39.687158 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7qd6m" podStartSLOduration=1.687135195 podStartE2EDuration="1.687135195s" podCreationTimestamp="2025-07-15 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:21:39.686599706 +0000 UTC m=+7.132104881" watchObservedRunningTime="2025-07-15 11:21:39.687135195 +0000 UTC m=+7.132640370" Jul 15 11:21:40.000905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666500711.mount: Deactivated successfully. 
Jul 15 11:21:40.535560 env[1322]: time="2025-07-15T11:21:40.535510465Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:40.537089 env[1322]: time="2025-07-15T11:21:40.537055266Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:40.538584 env[1322]: time="2025-07-15T11:21:40.538559222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:40.539808 env[1322]: time="2025-07-15T11:21:40.539778932Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:40.540781 env[1322]: time="2025-07-15T11:21:40.540461079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 11:21:40.543520 env[1322]: time="2025-07-15T11:21:40.543429103Z" level=info msg="CreateContainer within sandbox \"fdb309fad70915e384f721c23a30aba556fbcc6a8ac8e29f5b0ec845d976181c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 11:21:40.554944 env[1322]: time="2025-07-15T11:21:40.554897816Z" level=info msg="CreateContainer within sandbox \"fdb309fad70915e384f721c23a30aba556fbcc6a8ac8e29f5b0ec845d976181c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c83318f50da184e72053680122d277fb9c3d1c246e41e6a32cb6087842af0809\"" Jul 15 11:21:40.555493 env[1322]: time="2025-07-15T11:21:40.555467105Z" level=info msg="StartContainer for 
\"c83318f50da184e72053680122d277fb9c3d1c246e41e6a32cb6087842af0809\"" Jul 15 11:21:40.606283 env[1322]: time="2025-07-15T11:21:40.606232603Z" level=info msg="StartContainer for \"c83318f50da184e72053680122d277fb9c3d1c246e41e6a32cb6087842af0809\" returns successfully" Jul 15 11:21:40.687205 kubelet[2105]: I0715 11:21:40.687150 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-qb5mq" podStartSLOduration=1.143888848 podStartE2EDuration="2.687135334s" podCreationTimestamp="2025-07-15 11:21:38 +0000 UTC" firstStartedPulling="2025-07-15 11:21:38.998395258 +0000 UTC m=+6.443900433" lastFinishedPulling="2025-07-15 11:21:40.541641784 +0000 UTC m=+7.987146919" observedRunningTime="2025-07-15 11:21:40.68698495 +0000 UTC m=+8.132490125" watchObservedRunningTime="2025-07-15 11:21:40.687135334 +0000 UTC m=+8.132640509" Jul 15 11:21:40.939446 kubelet[2105]: E0715 11:21:40.939353 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:41.680981 kubelet[2105]: E0715 11:21:41.680948 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:44.035609 kubelet[2105]: E0715 11:21:44.035564 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:46.233264 sudo[1485]: pam_unix(sudo:session): session closed for user root Jul 15 11:21:46.232000 audit[1485]: USER_END pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 15 11:21:46.234212 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 15 11:21:46.234272 kernel: audit: type=1106 audit(1752578506.232:278): pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:21:46.236127 sshd[1479]: pam_unix(sshd:session): session closed for user core Jul 15 11:21:46.232000 audit[1485]: CRED_DISP pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:21:46.238738 kernel: audit: type=1104 audit(1752578506.232:279): pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:21:46.238000 audit[1479]: USER_END pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:21:46.238000 audit[1479]: CRED_DISP pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:21:46.242491 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:52124.service: Deactivated successfully. Jul 15 11:21:46.243528 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 15 11:21:46.244056 kernel: audit: type=1106 audit(1752578506.238:280): pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:21:46.244113 kernel: audit: type=1104 audit(1752578506.238:281): pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:21:46.244097 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:21:46.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:46.246513 kernel: audit: type=1131 audit(1752578506.241:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:21:46.246760 systemd-logind[1305]: Removed session 7. 
Jul 15 11:21:47.066000 audit[2493]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:47.066000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffff4d3f50 a2=0 a3=1 items=0 ppid=2215 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:47.071625 kernel: audit: type=1325 audit(1752578507.066:283): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:47.071703 kernel: audit: type=1300 audit(1752578507.066:283): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffff4d3f50 a2=0 a3=1 items=0 ppid=2215 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:47.071725 kernel: audit: type=1327 audit(1752578507.066:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:47.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:47.077000 audit[2493]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:47.077000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff4d3f50 a2=0 a3=1 items=0 ppid=2215 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:47.082932 
kernel: audit: type=1325 audit(1752578507.077:284): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:47.082996 kernel: audit: type=1300 audit(1752578507.077:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff4d3f50 a2=0 a3=1 items=0 ppid=2215 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:47.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:47.096000 audit[2495]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:47.096000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffccb899d0 a2=0 a3=1 items=0 ppid=2215 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:47.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:47.101000 audit[2495]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:47.101000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffccb899d0 a2=0 a3=1 items=0 ppid=2215 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:47.101000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:48.006924 update_engine[1308]: I0715 11:21:48.006877 1308 update_attempter.cc:509] Updating boot flags... Jul 15 11:21:50.091000 audit[2512]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:50.091000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe720dbf0 a2=0 a3=1 items=0 ppid=2215 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:50.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:50.095000 audit[2512]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:50.095000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe720dbf0 a2=0 a3=1 items=0 ppid=2215 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:50.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:50.140406 kubelet[2105]: I0715 11:21:50.140371 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cfa5fe19-8cfa-4627-a795-f77d218ad3b9-typha-certs\") pod \"calico-typha-5887c4c59c-bqlpw\" (UID: \"cfa5fe19-8cfa-4627-a795-f77d218ad3b9\") " pod="calico-system/calico-typha-5887c4c59c-bqlpw" Jul 15 
11:21:50.140406 kubelet[2105]: I0715 11:21:50.140407 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfa5fe19-8cfa-4627-a795-f77d218ad3b9-tigera-ca-bundle\") pod \"calico-typha-5887c4c59c-bqlpw\" (UID: \"cfa5fe19-8cfa-4627-a795-f77d218ad3b9\") " pod="calico-system/calico-typha-5887c4c59c-bqlpw" Jul 15 11:21:50.140777 kubelet[2105]: I0715 11:21:50.140429 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vjsm\" (UniqueName: \"kubernetes.io/projected/cfa5fe19-8cfa-4627-a795-f77d218ad3b9-kube-api-access-2vjsm\") pod \"calico-typha-5887c4c59c-bqlpw\" (UID: \"cfa5fe19-8cfa-4627-a795-f77d218ad3b9\") " pod="calico-system/calico-typha-5887c4c59c-bqlpw" Jul 15 11:21:50.179000 audit[2514]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:50.179000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc1b7f350 a2=0 a3=1 items=0 ppid=2215 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:50.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:50.186000 audit[2514]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:50.186000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc1b7f350 a2=0 a3=1 items=0 ppid=2215 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:50.186000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:50.341625 kubelet[2105]: I0715 11:21:50.341589 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-cni-log-dir\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341750 kubelet[2105]: I0715 11:21:50.341621 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aea436da-44e4-41fe-803f-15e22a5a8c67-node-certs\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341750 kubelet[2105]: I0715 11:21:50.341692 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-cni-net-dir\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341750 kubelet[2105]: I0715 11:21:50.341710 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xvr\" (UniqueName: \"kubernetes.io/projected/aea436da-44e4-41fe-803f-15e22a5a8c67-kube-api-access-d8xvr\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341750 kubelet[2105]: I0715 11:21:50.341729 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-cni-bin-dir\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341750 kubelet[2105]: I0715 11:21:50.341744 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-policysync\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341903 kubelet[2105]: I0715 11:21:50.341760 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-flexvol-driver-host\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341903 kubelet[2105]: I0715 11:21:50.341776 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-lib-modules\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341903 kubelet[2105]: I0715 11:21:50.341790 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea436da-44e4-41fe-803f-15e22a5a8c67-tigera-ca-bundle\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341903 kubelet[2105]: I0715 11:21:50.341805 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-var-lib-calico\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.341903 kubelet[2105]: I0715 11:21:50.341820 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-var-run-calico\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.342021 kubelet[2105]: I0715 11:21:50.341860 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aea436da-44e4-41fe-803f-15e22a5a8c67-xtables-lock\") pod \"calico-node-982vn\" (UID: \"aea436da-44e4-41fe-803f-15e22a5a8c67\") " pod="calico-system/calico-node-982vn" Jul 15 11:21:50.426612 kubelet[2105]: E0715 11:21:50.426510 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:50.427949 env[1322]: time="2025-07-15T11:21:50.427908179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5887c4c59c-bqlpw,Uid:cfa5fe19-8cfa-4627-a795-f77d218ad3b9,Namespace:calico-system,Attempt:0,}" Jul 15 11:21:50.443423 kubelet[2105]: E0715 11:21:50.443385 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.445148 kubelet[2105]: W0715 11:21:50.445116 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.445334 kubelet[2105]: E0715 11:21:50.445318 2105 plugins.go:691] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.445922 kubelet[2105]: E0715 11:21:50.445625 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.445922 kubelet[2105]: W0715 11:21:50.445638 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.445922 kubelet[2105]: E0715 11:21:50.445653 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.446120 kubelet[2105]: E0715 11:21:50.446105 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.446192 env[1322]: time="2025-07-15T11:21:50.446068817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:50.446192 env[1322]: time="2025-07-15T11:21:50.446128702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:50.446192 env[1322]: time="2025-07-15T11:21:50.446153544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:50.446355 kubelet[2105]: W0715 11:21:50.446334 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.446531 env[1322]: time="2025-07-15T11:21:50.446487655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb5e333d36100c296ccdc81d2958095401da0c1c3deb9db2850790c802efef90 pid=2524 runtime=io.containerd.runc.v2 Jul 15 11:21:50.446740 kubelet[2105]: E0715 11:21:50.446707 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.446978 kubelet[2105]: E0715 11:21:50.446963 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.447072 kubelet[2105]: W0715 11:21:50.447059 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.449921 kubelet[2105]: E0715 11:21:50.447195 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.450119 kubelet[2105]: E0715 11:21:50.450091 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.450216 kubelet[2105]: W0715 11:21:50.450202 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.450387 kubelet[2105]: E0715 11:21:50.450372 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.450619 kubelet[2105]: E0715 11:21:50.450597 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.450731 kubelet[2105]: W0715 11:21:50.450716 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.450836 kubelet[2105]: E0715 11:21:50.450822 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.451283 kubelet[2105]: E0715 11:21:50.451269 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.451417 kubelet[2105]: W0715 11:21:50.451403 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.451570 kubelet[2105]: E0715 11:21:50.451558 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.451800 kubelet[2105]: E0715 11:21:50.451789 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.451894 kubelet[2105]: W0715 11:21:50.451881 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.452063 kubelet[2105]: E0715 11:21:50.452050 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.452240 kubelet[2105]: E0715 11:21:50.452230 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.452314 kubelet[2105]: W0715 11:21:50.452300 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.452472 kubelet[2105]: E0715 11:21:50.452459 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.452598 kubelet[2105]: E0715 11:21:50.452586 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.452661 kubelet[2105]: W0715 11:21:50.452649 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.452786 kubelet[2105]: E0715 11:21:50.452776 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.453336 kubelet[2105]: E0715 11:21:50.453319 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.453458 kubelet[2105]: W0715 11:21:50.453443 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.453601 kubelet[2105]: E0715 11:21:50.453588 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.453824 kubelet[2105]: E0715 11:21:50.453812 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.453929 kubelet[2105]: W0715 11:21:50.453914 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.454094 kubelet[2105]: E0715 11:21:50.454064 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.454231 kubelet[2105]: E0715 11:21:50.454218 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.454299 kubelet[2105]: W0715 11:21:50.454287 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.454446 kubelet[2105]: E0715 11:21:50.454430 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.454563 kubelet[2105]: E0715 11:21:50.454551 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.454626 kubelet[2105]: W0715 11:21:50.454615 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.454764 kubelet[2105]: E0715 11:21:50.454750 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.500062 kubelet[2105]: E0715 11:21:50.498292 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h54b2" podUID="5caaf704-0a5d-4b3c-abd2-5b536ffec524" Jul 15 11:21:50.530920 kubelet[2105]: E0715 11:21:50.530897 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.540033 env[1322]: time="2025-07-15T11:21:50.539911965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5887c4c59c-bqlpw,Uid:cfa5fe19-8cfa-4627-a795-f77d218ad3b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb5e333d36100c296ccdc81d2958095401da0c1c3deb9db2850790c802efef90\"" Jul 15 11:21:50.540584 kubelet[2105]: E0715 11:21:50.540560 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:50.542192 env[1322]: time="2025-07-15T11:21:50.542161453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 11:21:50.561778 kubelet[2105]: E0715 11:21:50.561767 2105 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.561937 kubelet[2105]: I0715 11:21:50.561794 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5caaf704-0a5d-4b3c-abd2-5b536ffec524-kubelet-dir\") pod \"csi-node-driver-h54b2\" (UID: \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\") " pod="calico-system/csi-node-driver-h54b2" Jul 15 11:21:50.562028 kubelet[2105]: E0715 11:21:50.561997 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.562028 kubelet[2105]: W0715 11:21:50.562012 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.562028 kubelet[2105]: E0715 11:21:50.562027 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.562106 kubelet[2105]: I0715 11:21:50.562045 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5caaf704-0a5d-4b3c-abd2-5b536ffec524-varrun\") pod \"csi-node-driver-h54b2\" (UID: \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\") " pod="calico-system/csi-node-driver-h54b2" Jul 15 11:21:50.562265 kubelet[2105]: E0715 11:21:50.562242 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.562265 kubelet[2105]: W0715 11:21:50.562257 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.562324 kubelet[2105]: E0715 11:21:50.562271 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.562324 kubelet[2105]: I0715 11:21:50.562287 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhld2\" (UniqueName: \"kubernetes.io/projected/5caaf704-0a5d-4b3c-abd2-5b536ffec524-kube-api-access-lhld2\") pod \"csi-node-driver-h54b2\" (UID: \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\") " pod="calico-system/csi-node-driver-h54b2" Jul 15 11:21:50.562529 kubelet[2105]: E0715 11:21:50.562507 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.562529 kubelet[2105]: W0715 11:21:50.562522 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.562577 kubelet[2105]: E0715 11:21:50.562536 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.562577 kubelet[2105]: I0715 11:21:50.562553 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5caaf704-0a5d-4b3c-abd2-5b536ffec524-socket-dir\") pod \"csi-node-driver-h54b2\" (UID: \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\") " pod="calico-system/csi-node-driver-h54b2" Jul 15 11:21:50.562757 kubelet[2105]: E0715 11:21:50.562737 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.562757 kubelet[2105]: W0715 11:21:50.562750 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.562807 kubelet[2105]: E0715 11:21:50.562764 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.562807 kubelet[2105]: I0715 11:21:50.562779 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5caaf704-0a5d-4b3c-abd2-5b536ffec524-registration-dir\") pod \"csi-node-driver-h54b2\" (UID: \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\") " pod="calico-system/csi-node-driver-h54b2" Jul 15 11:21:50.563789 kubelet[2105]: E0715 11:21:50.563766 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.563789 kubelet[2105]: W0715 11:21:50.563783 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.563858 kubelet[2105]: E0715 11:21:50.563800 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.564924 kubelet[2105]: E0715 11:21:50.564899 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.564959 kubelet[2105]: W0715 11:21:50.564918 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.565037 kubelet[2105]: E0715 11:21:50.565002 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.565218 kubelet[2105]: E0715 11:21:50.565192 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.565218 kubelet[2105]: W0715 11:21:50.565207 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.565291 kubelet[2105]: E0715 11:21:50.565230 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.565445 kubelet[2105]: E0715 11:21:50.565419 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.565445 kubelet[2105]: W0715 11:21:50.565433 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.565548 kubelet[2105]: E0715 11:21:50.565530 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.565759 kubelet[2105]: E0715 11:21:50.565721 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.565759 kubelet[2105]: W0715 11:21:50.565748 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.565832 kubelet[2105]: E0715 11:21:50.565778 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.565999 kubelet[2105]: E0715 11:21:50.565985 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.566027 kubelet[2105]: W0715 11:21:50.565999 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.566057 kubelet[2105]: E0715 11:21:50.566038 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.566315 kubelet[2105]: E0715 11:21:50.566299 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.566354 kubelet[2105]: W0715 11:21:50.566315 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.566354 kubelet[2105]: E0715 11:21:50.566327 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.566570 kubelet[2105]: E0715 11:21:50.566547 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.566570 kubelet[2105]: W0715 11:21:50.566565 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.566639 kubelet[2105]: E0715 11:21:50.566593 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.566811 kubelet[2105]: E0715 11:21:50.566797 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.566865 kubelet[2105]: W0715 11:21:50.566810 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.566865 kubelet[2105]: E0715 11:21:50.566831 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.567050 kubelet[2105]: E0715 11:21:50.567037 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.567050 kubelet[2105]: W0715 11:21:50.567049 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.567111 kubelet[2105]: E0715 11:21:50.567058 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.601196 env[1322]: time="2025-07-15T11:21:50.600865955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-982vn,Uid:aea436da-44e4-41fe-803f-15e22a5a8c67,Namespace:calico-system,Attempt:0,}" Jul 15 11:21:50.615032 env[1322]: time="2025-07-15T11:21:50.614958297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:21:50.615128 env[1322]: time="2025-07-15T11:21:50.615038824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:21:50.615128 env[1322]: time="2025-07-15T11:21:50.615066227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:21:50.615356 env[1322]: time="2025-07-15T11:21:50.615310969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a pid=2653 runtime=io.containerd.runc.v2 Jul 15 11:21:50.664126 kubelet[2105]: E0715 11:21:50.663470 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.664126 kubelet[2105]: W0715 11:21:50.663491 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.664126 kubelet[2105]: E0715 11:21:50.663509 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.664126 kubelet[2105]: E0715 11:21:50.663867 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.664126 kubelet[2105]: W0715 11:21:50.663879 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.664126 kubelet[2105]: E0715 11:21:50.663905 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.664126 kubelet[2105]: E0715 11:21:50.664115 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.664126 kubelet[2105]: W0715 11:21:50.664130 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.664487 kubelet[2105]: E0715 11:21:50.664147 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.664970 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.666855 kubelet[2105]: W0715 11:21:50.664992 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.665013 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.665233 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.666855 kubelet[2105]: W0715 11:21:50.665242 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.665279 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.665425 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.666855 kubelet[2105]: W0715 11:21:50.665434 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.665490 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.666855 kubelet[2105]: E0715 11:21:50.665654 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667138 kubelet[2105]: W0715 11:21:50.665662 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667138 kubelet[2105]: E0715 11:21:50.665701 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.667138 kubelet[2105]: E0715 11:21:50.665851 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667138 kubelet[2105]: W0715 11:21:50.665859 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667138 kubelet[2105]: E0715 11:21:50.665903 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.667138 kubelet[2105]: E0715 11:21:50.666045 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667138 kubelet[2105]: W0715 11:21:50.666053 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667138 kubelet[2105]: E0715 11:21:50.666069 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.667138 kubelet[2105]: E0715 11:21:50.666240 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667138 kubelet[2105]: W0715 11:21:50.666251 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666267 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666430 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667363 kubelet[2105]: W0715 11:21:50.666438 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666453 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666666 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667363 kubelet[2105]: W0715 11:21:50.666675 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666700 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666919 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667363 kubelet[2105]: W0715 11:21:50.666930 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667363 kubelet[2105]: E0715 11:21:50.666951 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.667574 kubelet[2105]: E0715 11:21:50.667110 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667574 kubelet[2105]: W0715 11:21:50.667118 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667574 kubelet[2105]: E0715 11:21:50.667138 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.667574 kubelet[2105]: E0715 11:21:50.667294 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667574 kubelet[2105]: W0715 11:21:50.667302 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667574 kubelet[2105]: E0715 11:21:50.667321 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.667574 kubelet[2105]: E0715 11:21:50.667456 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667574 kubelet[2105]: W0715 11:21:50.667467 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667574 kubelet[2105]: E0715 11:21:50.667487 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.667778 kubelet[2105]: E0715 11:21:50.667628 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667778 kubelet[2105]: W0715 11:21:50.667636 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667778 kubelet[2105]: E0715 11:21:50.667649 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.667860 kubelet[2105]: E0715 11:21:50.667793 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.667860 kubelet[2105]: W0715 11:21:50.667801 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.667860 kubelet[2105]: E0715 11:21:50.667816 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.669926 kubelet[2105]: E0715 11:21:50.669815 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.669926 kubelet[2105]: W0715 11:21:50.669828 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.670022 kubelet[2105]: E0715 11:21:50.669943 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.670413 kubelet[2105]: E0715 11:21:50.670388 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.670481 kubelet[2105]: W0715 11:21:50.670440 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.670555 kubelet[2105]: E0715 11:21:50.670531 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.670780 kubelet[2105]: E0715 11:21:50.670754 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.670780 kubelet[2105]: W0715 11:21:50.670769 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.670989 kubelet[2105]: E0715 11:21:50.670962 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.671118 kubelet[2105]: E0715 11:21:50.671099 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.671118 kubelet[2105]: W0715 11:21:50.671113 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.671236 kubelet[2105]: E0715 11:21:50.671218 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.671414 kubelet[2105]: E0715 11:21:50.671395 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.671414 kubelet[2105]: W0715 11:21:50.671410 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.671505 kubelet[2105]: E0715 11:21:50.671435 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.671624 kubelet[2105]: E0715 11:21:50.671605 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.671624 kubelet[2105]: W0715 11:21:50.671619 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.671694 kubelet[2105]: E0715 11:21:50.671632 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.671986 kubelet[2105]: E0715 11:21:50.671963 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.671986 kubelet[2105]: W0715 11:21:50.671979 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.672088 kubelet[2105]: E0715 11:21:50.671991 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:50.682701 kubelet[2105]: E0715 11:21:50.682632 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:50.682701 kubelet[2105]: W0715 11:21:50.682650 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:50.682701 kubelet[2105]: E0715 11:21:50.682665 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:50.719249 env[1322]: time="2025-07-15T11:21:50.719204166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-982vn,Uid:aea436da-44e4-41fe-803f-15e22a5a8c67,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\"" Jul 15 11:21:51.200000 audit[2716]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:51.200000 audit[2716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe8803a0 a2=0 a3=1 items=0 ppid=2215 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:51.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:51.209000 audit[2716]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:21:51.209000 audit[2716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe8803a0 a2=0 a3=1 items=0 ppid=2215 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:21:51.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:21:51.407024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412003678.mount: Deactivated successfully. 
Jul 15 11:21:52.471200 env[1322]: time="2025-07-15T11:21:52.471152423Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:52.472572 env[1322]: time="2025-07-15T11:21:52.472536299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:52.474179 env[1322]: time="2025-07-15T11:21:52.474150994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:52.475380 env[1322]: time="2025-07-15T11:21:52.475350775Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:52.475781 env[1322]: time="2025-07-15T11:21:52.475750769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 11:21:52.480122 env[1322]: time="2025-07-15T11:21:52.479688459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 11:21:52.494805 env[1322]: time="2025-07-15T11:21:52.494766843Z" level=info msg="CreateContainer within sandbox \"bb5e333d36100c296ccdc81d2958095401da0c1c3deb9db2850790c802efef90\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 11:21:52.506419 env[1322]: time="2025-07-15T11:21:52.506372615Z" level=info msg="CreateContainer within sandbox \"bb5e333d36100c296ccdc81d2958095401da0c1c3deb9db2850790c802efef90\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"c9229791bc4595c3ce03eca1ec55634bdc82729ce9f0d45a3004ffcb2f0d469c\"" Jul 15 11:21:52.507764 env[1322]: time="2025-07-15T11:21:52.507002788Z" level=info msg="StartContainer for \"c9229791bc4595c3ce03eca1ec55634bdc82729ce9f0d45a3004ffcb2f0d469c\"" Jul 15 11:21:52.580976 env[1322]: time="2025-07-15T11:21:52.580933785Z" level=info msg="StartContainer for \"c9229791bc4595c3ce03eca1ec55634bdc82729ce9f0d45a3004ffcb2f0d469c\" returns successfully" Jul 15 11:21:52.650626 kubelet[2105]: E0715 11:21:52.650590 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h54b2" podUID="5caaf704-0a5d-4b3c-abd2-5b536ffec524" Jul 15 11:21:52.711739 kubelet[2105]: E0715 11:21:52.711707 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:52.750945 kubelet[2105]: E0715 11:21:52.750833 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.750945 kubelet[2105]: W0715 11:21:52.750869 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.750945 kubelet[2105]: E0715 11:21:52.750889 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.751565 kubelet[2105]: E0715 11:21:52.751529 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.751565 kubelet[2105]: W0715 11:21:52.751545 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.751565 kubelet[2105]: E0715 11:21:52.751557 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.751740 kubelet[2105]: E0715 11:21:52.751720 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.751740 kubelet[2105]: W0715 11:21:52.751733 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.751811 kubelet[2105]: E0715 11:21:52.751744 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.757973 kubelet[2105]: E0715 11:21:52.757937 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.757973 kubelet[2105]: W0715 11:21:52.757958 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.757973 kubelet[2105]: E0715 11:21:52.757977 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.758224 kubelet[2105]: E0715 11:21:52.758198 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.758224 kubelet[2105]: W0715 11:21:52.758211 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.758224 kubelet[2105]: E0715 11:21:52.758221 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.758407 kubelet[2105]: E0715 11:21:52.758383 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.758407 kubelet[2105]: W0715 11:21:52.758397 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.758407 kubelet[2105]: E0715 11:21:52.758407 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.758629 kubelet[2105]: E0715 11:21:52.758602 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.758629 kubelet[2105]: W0715 11:21:52.758618 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.758629 kubelet[2105]: E0715 11:21:52.758627 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.759965 kubelet[2105]: E0715 11:21:52.759943 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.760060 kubelet[2105]: W0715 11:21:52.760046 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.760136 kubelet[2105]: E0715 11:21:52.760123 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.760406 kubelet[2105]: E0715 11:21:52.760390 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.760488 kubelet[2105]: W0715 11:21:52.760475 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.760554 kubelet[2105]: E0715 11:21:52.760542 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.761118 kubelet[2105]: E0715 11:21:52.761104 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.761222 kubelet[2105]: W0715 11:21:52.761207 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.761281 kubelet[2105]: E0715 11:21:52.761270 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.761501 kubelet[2105]: E0715 11:21:52.761488 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.761583 kubelet[2105]: W0715 11:21:52.761569 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.761640 kubelet[2105]: E0715 11:21:52.761629 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.761915 kubelet[2105]: E0715 11:21:52.761902 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.762509 kubelet[2105]: W0715 11:21:52.762483 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.762600 kubelet[2105]: E0715 11:21:52.762587 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.762857 kubelet[2105]: E0715 11:21:52.762828 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.762982 kubelet[2105]: W0715 11:21:52.762945 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.763063 kubelet[2105]: E0715 11:21:52.763050 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.763355 kubelet[2105]: E0715 11:21:52.763340 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.763428 kubelet[2105]: W0715 11:21:52.763416 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.763493 kubelet[2105]: E0715 11:21:52.763480 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.763702 kubelet[2105]: E0715 11:21:52.763689 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.763781 kubelet[2105]: W0715 11:21:52.763768 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.763837 kubelet[2105]: E0715 11:21:52.763825 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.783992 kubelet[2105]: E0715 11:21:52.783958 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.783992 kubelet[2105]: W0715 11:21:52.783982 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.784144 kubelet[2105]: E0715 11:21:52.784002 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.785601 kubelet[2105]: E0715 11:21:52.785557 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.785601 kubelet[2105]: W0715 11:21:52.785593 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.785601 kubelet[2105]: E0715 11:21:52.785612 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.785903 kubelet[2105]: E0715 11:21:52.785877 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.785903 kubelet[2105]: W0715 11:21:52.785898 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.786030 kubelet[2105]: E0715 11:21:52.786007 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.786105 kubelet[2105]: E0715 11:21:52.786082 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.786105 kubelet[2105]: W0715 11:21:52.786098 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.786193 kubelet[2105]: E0715 11:21:52.786175 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.786290 kubelet[2105]: E0715 11:21:52.786269 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.786290 kubelet[2105]: W0715 11:21:52.786284 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.786290 kubelet[2105]: E0715 11:21:52.786297 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.786501 kubelet[2105]: E0715 11:21:52.786481 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.786501 kubelet[2105]: W0715 11:21:52.786501 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.786574 kubelet[2105]: E0715 11:21:52.786512 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.788042 kubelet[2105]: E0715 11:21:52.788003 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.788042 kubelet[2105]: W0715 11:21:52.788024 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.788042 kubelet[2105]: E0715 11:21:52.788041 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.793006 kubelet[2105]: E0715 11:21:52.792976 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.793006 kubelet[2105]: W0715 11:21:52.792996 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.793168 kubelet[2105]: E0715 11:21:52.793128 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.795970 kubelet[2105]: E0715 11:21:52.795941 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.795970 kubelet[2105]: W0715 11:21:52.795960 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.796112 kubelet[2105]: E0715 11:21:52.796084 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.798346 kubelet[2105]: E0715 11:21:52.798311 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.798346 kubelet[2105]: W0715 11:21:52.798331 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.798493 kubelet[2105]: E0715 11:21:52.798469 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.798738 kubelet[2105]: E0715 11:21:52.798716 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.798738 kubelet[2105]: W0715 11:21:52.798736 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.798867 kubelet[2105]: E0715 11:21:52.798836 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.802994 kubelet[2105]: E0715 11:21:52.802964 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.802994 kubelet[2105]: W0715 11:21:52.802986 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.803117 kubelet[2105]: E0715 11:21:52.803108 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.804969 kubelet[2105]: E0715 11:21:52.804932 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.804969 kubelet[2105]: W0715 11:21:52.804957 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.805079 kubelet[2105]: E0715 11:21:52.804977 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.810379 kubelet[2105]: E0715 11:21:52.810350 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.810473 kubelet[2105]: W0715 11:21:52.810430 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.810599 kubelet[2105]: E0715 11:21:52.810579 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.811622 kubelet[2105]: E0715 11:21:52.811515 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.811622 kubelet[2105]: W0715 11:21:52.811532 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.811622 kubelet[2105]: E0715 11:21:52.811551 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.811828 kubelet[2105]: E0715 11:21:52.811810 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.811828 kubelet[2105]: W0715 11:21:52.811826 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.811921 kubelet[2105]: E0715 11:21:52.811837 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:52.812332 kubelet[2105]: E0715 11:21:52.812315 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.812447 kubelet[2105]: W0715 11:21:52.812431 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.812523 kubelet[2105]: E0715 11:21:52.812510 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:21:52.813260 kubelet[2105]: E0715 11:21:52.813243 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:21:52.813360 kubelet[2105]: W0715 11:21:52.813334 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:21:52.813438 kubelet[2105]: E0715 11:21:52.813425 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:21:53.435896 env[1322]: time="2025-07-15T11:21:53.435381698Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:53.436935 env[1322]: time="2025-07-15T11:21:53.436898019Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:53.438313 env[1322]: time="2025-07-15T11:21:53.438288571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:53.439579 env[1322]: time="2025-07-15T11:21:53.439536190Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:21:53.440100 env[1322]: time="2025-07-15T11:21:53.440069993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 11:21:53.442697 env[1322]: time="2025-07-15T11:21:53.442408060Z" level=info msg="CreateContainer within sandbox \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 11:21:53.453993 env[1322]: time="2025-07-15T11:21:53.453954183Z" level=info msg="CreateContainer within sandbox \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef4d0adc1a033c0f795196183f4f74693a5157c24a68ad28ae0cdd3cba5e4e67\"" Jul 15 
11:21:53.454922 env[1322]: time="2025-07-15T11:21:53.454892938Z" level=info msg="StartContainer for \"ef4d0adc1a033c0f795196183f4f74693a5157c24a68ad28ae0cdd3cba5e4e67\"" Jul 15 11:21:53.534334 env[1322]: time="2025-07-15T11:21:53.534290365Z" level=info msg="StartContainer for \"ef4d0adc1a033c0f795196183f4f74693a5157c24a68ad28ae0cdd3cba5e4e67\" returns successfully" Jul 15 11:21:53.576248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef4d0adc1a033c0f795196183f4f74693a5157c24a68ad28ae0cdd3cba5e4e67-rootfs.mount: Deactivated successfully. Jul 15 11:21:53.599373 env[1322]: time="2025-07-15T11:21:53.599326645Z" level=info msg="shim disconnected" id=ef4d0adc1a033c0f795196183f4f74693a5157c24a68ad28ae0cdd3cba5e4e67 Jul 15 11:21:53.599606 env[1322]: time="2025-07-15T11:21:53.599585425Z" level=warning msg="cleaning up after shim disconnected" id=ef4d0adc1a033c0f795196183f4f74693a5157c24a68ad28ae0cdd3cba5e4e67 namespace=k8s.io Jul 15 11:21:53.599665 env[1322]: time="2025-07-15T11:21:53.599652631Z" level=info msg="cleaning up dead shim" Jul 15 11:21:53.606712 env[1322]: time="2025-07-15T11:21:53.606673192Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:21:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2840 runtime=io.containerd.runc.v2\n" Jul 15 11:21:53.716469 kubelet[2105]: I0715 11:21:53.716248 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:21:53.719056 kubelet[2105]: E0715 11:21:53.717323 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:21:53.719156 env[1322]: time="2025-07-15T11:21:53.718463049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 11:21:53.744986 kubelet[2105]: I0715 11:21:53.744899 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5887c4c59c-bqlpw" 
podStartSLOduration=1.8072582640000001 podStartE2EDuration="3.744882081s" podCreationTimestamp="2025-07-15 11:21:50 +0000 UTC" firstStartedPulling="2025-07-15 11:21:50.541817421 +0000 UTC m=+17.987322596" lastFinishedPulling="2025-07-15 11:21:52.479441238 +0000 UTC m=+19.924946413" observedRunningTime="2025-07-15 11:21:52.747408659 +0000 UTC m=+20.192913834" watchObservedRunningTime="2025-07-15 11:21:53.744882081 +0000 UTC m=+21.190387256"
Jul 15 11:21:54.649958 kubelet[2105]: E0715 11:21:54.649353 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h54b2" podUID="5caaf704-0a5d-4b3c-abd2-5b536ffec524"
Jul 15 11:21:55.854109 env[1322]: time="2025-07-15T11:21:55.854063495Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:21:55.855316 env[1322]: time="2025-07-15T11:21:55.855268463Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:21:55.856721 env[1322]: time="2025-07-15T11:21:55.856698727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:21:55.857923 env[1322]: time="2025-07-15T11:21:55.857887974Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:21:55.858541 env[1322]: time="2025-07-15T11:21:55.858508299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 15 11:21:55.862124 env[1322]: time="2025-07-15T11:21:55.862089040Z" level=info msg="CreateContainer within sandbox \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 15 11:21:55.879451 env[1322]: time="2025-07-15T11:21:55.879412423Z" level=info msg="CreateContainer within sandbox \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4c45b3b16959deecadc85cd1f5cc27b3ae68466929baf264ca74e08a5d6a1b4e\""
Jul 15 11:21:55.882695 env[1322]: time="2025-07-15T11:21:55.880946455Z" level=info msg="StartContainer for \"4c45b3b16959deecadc85cd1f5cc27b3ae68466929baf264ca74e08a5d6a1b4e\""
Jul 15 11:21:55.951698 env[1322]: time="2025-07-15T11:21:55.951652209Z" level=info msg="StartContainer for \"4c45b3b16959deecadc85cd1f5cc27b3ae68466929baf264ca74e08a5d6a1b4e\" returns successfully"
Jul 15 11:21:56.606675 env[1322]: time="2025-07-15T11:21:56.606605421Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:21:56.617139 kubelet[2105]: I0715 11:21:56.616930 2105 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 11:21:56.626646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c45b3b16959deecadc85cd1f5cc27b3ae68466929baf264ca74e08a5d6a1b4e-rootfs.mount: Deactivated successfully.
Jul 15 11:21:56.632595 env[1322]: time="2025-07-15T11:21:56.632554910Z" level=info msg="shim disconnected" id=4c45b3b16959deecadc85cd1f5cc27b3ae68466929baf264ca74e08a5d6a1b4e
Jul 15 11:21:56.632748 env[1322]: time="2025-07-15T11:21:56.632728882Z" level=warning msg="cleaning up after shim disconnected" id=4c45b3b16959deecadc85cd1f5cc27b3ae68466929baf264ca74e08a5d6a1b4e namespace=k8s.io
Jul 15 11:21:56.632815 env[1322]: time="2025-07-15T11:21:56.632802088Z" level=info msg="cleaning up dead shim"
Jul 15 11:21:56.644083 env[1322]: time="2025-07-15T11:21:56.644049952Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:21:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2913 runtime=io.containerd.runc.v2\n"
Jul 15 11:21:56.654432 env[1322]: time="2025-07-15T11:21:56.654245462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h54b2,Uid:5caaf704-0a5d-4b3c-abd2-5b536ffec524,Namespace:calico-system,Attempt:0,}"
Jul 15 11:21:56.714698 kubelet[2105]: I0715 11:21:56.712294 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-backend-key-pair\") pod \"whisker-6d7f6f864f-8v7kf\" (UID: \"3a26098c-3a6d-4086-9f52-c996f6c18545\") " pod="calico-system/whisker-6d7f6f864f-8v7kf"
Jul 15 11:21:56.714698 kubelet[2105]: I0715 11:21:56.712343 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-ca-bundle\") pod \"whisker-6d7f6f864f-8v7kf\" (UID: \"3a26098c-3a6d-4086-9f52-c996f6c18545\") " pod="calico-system/whisker-6d7f6f864f-8v7kf"
Jul 15 11:21:56.714698 kubelet[2105]: I0715 11:21:56.712360 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2v47\" (UniqueName: \"kubernetes.io/projected/3a26098c-3a6d-4086-9f52-c996f6c18545-kube-api-access-d2v47\") pod \"whisker-6d7f6f864f-8v7kf\" (UID: \"3a26098c-3a6d-4086-9f52-c996f6c18545\") " pod="calico-system/whisker-6d7f6f864f-8v7kf"
Jul 15 11:21:56.714698 kubelet[2105]: I0715 11:21:56.712383 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8b304b2-34d6-4422-9aa6-042595bfafa7-tigera-ca-bundle\") pod \"calico-kube-controllers-5744454759-sfdbw\" (UID: \"e8b304b2-34d6-4422-9aa6-042595bfafa7\") " pod="calico-system/calico-kube-controllers-5744454759-sfdbw"
Jul 15 11:21:56.714698 kubelet[2105]: I0715 11:21:56.712402 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb2b16b0-5f08-47fc-9227-ffb2cce80eb6-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-fnwj2\" (UID: \"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6\") " pod="calico-system/goldmane-58fd7646b9-fnwj2"
Jul 15 11:21:56.715113 kubelet[2105]: I0715 11:21:56.712427 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrcr\" (UniqueName: \"kubernetes.io/projected/56b3e9f3-a41a-497f-bf77-8ab0f2093996-kube-api-access-vvrcr\") pod \"calico-apiserver-64cf58f847-st89j\" (UID: \"56b3e9f3-a41a-497f-bf77-8ab0f2093996\") " pod="calico-apiserver/calico-apiserver-64cf58f847-st89j"
Jul 15 11:21:56.715113 kubelet[2105]: I0715 11:21:56.712446 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bee128a-ec69-4c1c-9486-dda7cdd5da8f-config-volume\") pod \"coredns-7c65d6cfc9-z6zgm\" (UID: \"2bee128a-ec69-4c1c-9486-dda7cdd5da8f\") " pod="kube-system/coredns-7c65d6cfc9-z6zgm"
Jul 15 11:21:56.715113 kubelet[2105]: I0715 11:21:56.712476 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb2b16b0-5f08-47fc-9227-ffb2cce80eb6-config\") pod \"goldmane-58fd7646b9-fnwj2\" (UID: \"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6\") " pod="calico-system/goldmane-58fd7646b9-fnwj2"
Jul 15 11:21:56.715113 kubelet[2105]: I0715 11:21:56.712513 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bb2b16b0-5f08-47fc-9227-ffb2cce80eb6-goldmane-key-pair\") pod \"goldmane-58fd7646b9-fnwj2\" (UID: \"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6\") " pod="calico-system/goldmane-58fd7646b9-fnwj2"
Jul 15 11:21:56.715113 kubelet[2105]: I0715 11:21:56.712528 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/234de6b0-3684-41e7-9d27-ef2f8683df1a-config-volume\") pod \"coredns-7c65d6cfc9-m5sfv\" (UID: \"234de6b0-3684-41e7-9d27-ef2f8683df1a\") " pod="kube-system/coredns-7c65d6cfc9-m5sfv"
Jul 15 11:21:56.715402 kubelet[2105]: I0715 11:21:56.712547 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbjt8\" (UniqueName: \"kubernetes.io/projected/bb2b16b0-5f08-47fc-9227-ffb2cce80eb6-kube-api-access-pbjt8\") pod \"goldmane-58fd7646b9-fnwj2\" (UID: \"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6\") " pod="calico-system/goldmane-58fd7646b9-fnwj2"
Jul 15 11:21:56.715402 kubelet[2105]: I0715 11:21:56.712563 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmj52\" (UniqueName: \"kubernetes.io/projected/234de6b0-3684-41e7-9d27-ef2f8683df1a-kube-api-access-tmj52\") pod \"coredns-7c65d6cfc9-m5sfv\" (UID: \"234de6b0-3684-41e7-9d27-ef2f8683df1a\") " pod="kube-system/coredns-7c65d6cfc9-m5sfv"
Jul 15 11:21:56.715402 kubelet[2105]: I0715 11:21:56.712580 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2r7f\" (UniqueName: \"kubernetes.io/projected/2bee128a-ec69-4c1c-9486-dda7cdd5da8f-kube-api-access-h2r7f\") pod \"coredns-7c65d6cfc9-z6zgm\" (UID: \"2bee128a-ec69-4c1c-9486-dda7cdd5da8f\") " pod="kube-system/coredns-7c65d6cfc9-z6zgm"
Jul 15 11:21:56.715402 kubelet[2105]: I0715 11:21:56.712596 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/321abb3c-e37b-40c2-9f34-4c9e458226fa-calico-apiserver-certs\") pod \"calico-apiserver-64cf58f847-7wqks\" (UID: \"321abb3c-e37b-40c2-9f34-4c9e458226fa\") " pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks"
Jul 15 11:21:56.715402 kubelet[2105]: I0715 11:21:56.712617 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56b3e9f3-a41a-497f-bf77-8ab0f2093996-calico-apiserver-certs\") pod \"calico-apiserver-64cf58f847-st89j\" (UID: \"56b3e9f3-a41a-497f-bf77-8ab0f2093996\") " pod="calico-apiserver/calico-apiserver-64cf58f847-st89j"
Jul 15 11:21:56.715634 kubelet[2105]: I0715 11:21:56.712642 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzj9\" (UniqueName: \"kubernetes.io/projected/321abb3c-e37b-40c2-9f34-4c9e458226fa-kube-api-access-gxzj9\") pod \"calico-apiserver-64cf58f847-7wqks\" (UID: \"321abb3c-e37b-40c2-9f34-4c9e458226fa\") " pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks"
Jul 15 11:21:56.715634 kubelet[2105]: I0715 11:21:56.712661 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzg2\" (UniqueName: \"kubernetes.io/projected/e8b304b2-34d6-4422-9aa6-042595bfafa7-kube-api-access-9qzg2\") pod \"calico-kube-controllers-5744454759-sfdbw\" (UID: \"e8b304b2-34d6-4422-9aa6-042595bfafa7\") " pod="calico-system/calico-kube-controllers-5744454759-sfdbw"
Jul 15 11:21:56.727608 env[1322]: time="2025-07-15T11:21:56.727567533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 15 11:21:56.810192 env[1322]: time="2025-07-15T11:21:56.810131489Z" level=error msg="Failed to destroy network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:56.810670 env[1322]: time="2025-07-15T11:21:56.810622403Z" level=error msg="encountered an error cleaning up failed sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:56.810797 env[1322]: time="2025-07-15T11:21:56.810770453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h54b2,Uid:5caaf704-0a5d-4b3c-abd2-5b536ffec524,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:56.811952 kubelet[2105]: E0715 11:21:56.811905 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:56.812127 kubelet[2105]: E0715 11:21:56.812104 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h54b2"
Jul 15 11:21:56.812207 kubelet[2105]: E0715 11:21:56.812192 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h54b2"
Jul 15 11:21:56.812327 kubelet[2105]: E0715 11:21:56.812300 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h54b2_calico-system(5caaf704-0a5d-4b3c-abd2-5b536ffec524)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h54b2_calico-system(5caaf704-0a5d-4b3c-abd2-5b536ffec524)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h54b2" podUID="5caaf704-0a5d-4b3c-abd2-5b536ffec524"
Jul 15 11:21:56.950329 env[1322]: time="2025-07-15T11:21:56.950141888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-7wqks,Uid:321abb3c-e37b-40c2-9f34-4c9e458226fa,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 11:21:56.967092 env[1322]: time="2025-07-15T11:21:56.966463906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7f6f864f-8v7kf,Uid:3a26098c-3a6d-4086-9f52-c996f6c18545,Namespace:calico-system,Attempt:0,}"
Jul 15 11:21:56.980803 env[1322]: time="2025-07-15T11:21:56.977596642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-st89j,Uid:56b3e9f3-a41a-497f-bf77-8ab0f2093996,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 11:21:56.980803 env[1322]: time="2025-07-15T11:21:56.978297331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5744454759-sfdbw,Uid:e8b304b2-34d6-4422-9aa6-042595bfafa7,Namespace:calico-system,Attempt:0,}"
Jul 15 11:21:56.980979 kubelet[2105]: E0715 11:21:56.979375 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:21:56.980979 kubelet[2105]: E0715 11:21:56.980042 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:21:56.983970 env[1322]: time="2025-07-15T11:21:56.982132918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5sfv,Uid:234de6b0-3684-41e7-9d27-ef2f8683df1a,Namespace:kube-system,Attempt:0,}"
Jul 15 11:21:56.984319 env[1322]: time="2025-07-15T11:21:56.984284428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6zgm,Uid:2bee128a-ec69-4c1c-9486-dda7cdd5da8f,Namespace:kube-system,Attempt:0,}"
Jul 15 11:21:56.984800 env[1322]: time="2025-07-15T11:21:56.984769022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fnwj2,Uid:bb2b16b0-5f08-47fc-9227-ffb2cce80eb6,Namespace:calico-system,Attempt:0,}"
Jul 15 11:21:57.024361 env[1322]: time="2025-07-15T11:21:57.024289667Z" level=error msg="Failed to destroy network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.025207 env[1322]: time="2025-07-15T11:21:57.025167285Z" level=error msg="encountered an error cleaning up failed sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.025388 env[1322]: time="2025-07-15T11:21:57.025357498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-7wqks,Uid:321abb3c-e37b-40c2-9f34-4c9e458226fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.025727 kubelet[2105]: E0715 11:21:57.025679 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.025826 kubelet[2105]: E0715 11:21:57.025744 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks"
Jul 15 11:21:57.025826 kubelet[2105]: E0715 11:21:57.025765 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks"
Jul 15 11:21:57.025826 kubelet[2105]: E0715 11:21:57.025804 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64cf58f847-7wqks_calico-apiserver(321abb3c-e37b-40c2-9f34-4c9e458226fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64cf58f847-7wqks_calico-apiserver(321abb3c-e37b-40c2-9f34-4c9e458226fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks" podUID="321abb3c-e37b-40c2-9f34-4c9e458226fa"
Jul 15 11:21:57.060099 env[1322]: time="2025-07-15T11:21:57.060031171Z" level=error msg="Failed to destroy network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.060497 env[1322]: time="2025-07-15T11:21:57.060452719Z" level=error msg="encountered an error cleaning up failed sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.060560 env[1322]: time="2025-07-15T11:21:57.060507483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7f6f864f-8v7kf,Uid:3a26098c-3a6d-4086-9f52-c996f6c18545,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.060759 kubelet[2105]: E0715 11:21:57.060717 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.060822 kubelet[2105]: E0715 11:21:57.060778 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d7f6f864f-8v7kf"
Jul 15 11:21:57.060822 kubelet[2105]: E0715 11:21:57.060797 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d7f6f864f-8v7kf"
Jul 15 11:21:57.060932 kubelet[2105]: E0715 11:21:57.060836 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d7f6f864f-8v7kf_calico-system(3a26098c-3a6d-4086-9f52-c996f6c18545)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d7f6f864f-8v7kf_calico-system(3a26098c-3a6d-4086-9f52-c996f6c18545)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d7f6f864f-8v7kf" podUID="3a26098c-3a6d-4086-9f52-c996f6c18545"
Jul 15 11:21:57.099607 env[1322]: time="2025-07-15T11:21:57.099552328Z" level=error msg="Failed to destroy network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.099962 env[1322]: time="2025-07-15T11:21:57.099929753Z" level=error msg="encountered an error cleaning up failed sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.100024 env[1322]: time="2025-07-15T11:21:57.099979396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5744454759-sfdbw,Uid:e8b304b2-34d6-4422-9aa6-042595bfafa7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.100212 kubelet[2105]: E0715 11:21:57.100176 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.100275 kubelet[2105]: E0715 11:21:57.100235 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5744454759-sfdbw"
Jul 15 11:21:57.100275 kubelet[2105]: E0715 11:21:57.100261 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5744454759-sfdbw"
Jul 15 11:21:57.100389 kubelet[2105]: E0715 11:21:57.100300 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5744454759-sfdbw_calico-system(e8b304b2-34d6-4422-9aa6-042595bfafa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5744454759-sfdbw_calico-system(e8b304b2-34d6-4422-9aa6-042595bfafa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5744454759-sfdbw" podUID="e8b304b2-34d6-4422-9aa6-042595bfafa7"
Jul 15 11:21:57.108906 env[1322]: time="2025-07-15T11:21:57.108824066Z" level=error msg="Failed to destroy network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.109231 env[1322]: time="2025-07-15T11:21:57.109191451Z" level=error msg="encountered an error cleaning up failed sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.109287 env[1322]: time="2025-07-15T11:21:57.109242574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6zgm,Uid:2bee128a-ec69-4c1c-9486-dda7cdd5da8f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.110326 kubelet[2105]: E0715 11:21:57.110274 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.110420 kubelet[2105]: E0715 11:21:57.110363 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-z6zgm"
Jul 15 11:21:57.110420 kubelet[2105]: E0715 11:21:57.110392 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-z6zgm"
Jul 15 11:21:57.110473 kubelet[2105]: E0715 11:21:57.110435 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-z6zgm_kube-system(2bee128a-ec69-4c1c-9486-dda7cdd5da8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-z6zgm_kube-system(2bee128a-ec69-4c1c-9486-dda7cdd5da8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-z6zgm" podUID="2bee128a-ec69-4c1c-9486-dda7cdd5da8f"
Jul 15 11:21:57.118929 env[1322]: time="2025-07-15T11:21:57.118870976Z" level=error msg="Failed to destroy network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.119423 env[1322]: time="2025-07-15T11:21:57.119366130Z" level=error msg="encountered an error cleaning up failed sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.119483 env[1322]: time="2025-07-15T11:21:57.119447975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fnwj2,Uid:bb2b16b0-5f08-47fc-9227-ffb2cce80eb6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.123262 kubelet[2105]: E0715 11:21:57.121276 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.123262 kubelet[2105]: E0715 11:21:57.122971 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fnwj2"
Jul 15 11:21:57.123262 kubelet[2105]: E0715 11:21:57.122991 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fnwj2"
Jul 15 11:21:57.123475 kubelet[2105]: E0715 11:21:57.123032 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fnwj2_calico-system(bb2b16b0-5f08-47fc-9227-ffb2cce80eb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fnwj2_calico-system(bb2b16b0-5f08-47fc-9227-ffb2cce80eb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fnwj2" podUID="bb2b16b0-5f08-47fc-9227-ffb2cce80eb6"
Jul 15 11:21:57.123677 env[1322]: time="2025-07-15T11:21:57.123624694Z" level=error msg="Failed to destroy network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.124125 env[1322]: time="2025-07-15T11:21:57.124089565Z" level=error msg="encountered an error cleaning up failed sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.124326 env[1322]: time="2025-07-15T11:21:57.124149969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5sfv,Uid:234de6b0-3684-41e7-9d27-ef2f8683df1a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.125175 kubelet[2105]: E0715 11:21:57.124414 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.125808 kubelet[2105]: E0715 11:21:57.125183 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m5sfv"
Jul 15 11:21:57.125808 kubelet[2105]: E0715 11:21:57.125742 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m5sfv"
Jul 15 11:21:57.125991 kubelet[2105]: E0715 11:21:57.125941 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-m5sfv_kube-system(234de6b0-3684-41e7-9d27-ef2f8683df1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-m5sfv_kube-system(234de6b0-3684-41e7-9d27-ef2f8683df1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m5sfv" podUID="234de6b0-3684-41e7-9d27-ef2f8683df1a"
Jul 15 11:21:57.131710 env[1322]: time="2025-07-15T11:21:57.131665910Z" level=error msg="Failed to destroy network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.132481 env[1322]: time="2025-07-15T11:21:57.132292792Z" level=error msg="encountered an error cleaning up failed sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.132879 env[1322]: time="2025-07-15T11:21:57.132701259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-st89j,Uid:56b3e9f3-a41a-497f-bf77-8ab0f2093996,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.133384 kubelet[2105]: E0715 11:21:57.133353 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 11:21:57.133460 kubelet[2105]: E0715 11:21:57.133397 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cf58f847-st89j" Jul 15 11:21:57.133460 kubelet[2105]: E0715 11:21:57.133414 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64cf58f847-st89j" Jul 15 11:21:57.133460 kubelet[2105]: E0715 11:21:57.133444 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64cf58f847-st89j_calico-apiserver(56b3e9f3-a41a-497f-bf77-8ab0f2093996)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64cf58f847-st89j_calico-apiserver(56b3e9f3-a41a-497f-bf77-8ab0f2093996)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cf58f847-st89j" podUID="56b3e9f3-a41a-497f-bf77-8ab0f2093996" Jul 15 11:21:57.730037 kubelet[2105]: I0715 11:21:57.729998 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:21:57.731674 env[1322]: time="2025-07-15T11:21:57.730863164Z" level=info msg="StopPodSandbox for 
\"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\"" Jul 15 11:21:57.732800 kubelet[2105]: I0715 11:21:57.732673 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:21:57.733747 env[1322]: time="2025-07-15T11:21:57.733701473Z" level=info msg="StopPodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\"" Jul 15 11:21:57.735873 kubelet[2105]: I0715 11:21:57.735762 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:21:57.736857 env[1322]: time="2025-07-15T11:21:57.736809880Z" level=info msg="StopPodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\"" Jul 15 11:21:57.737024 kubelet[2105]: I0715 11:21:57.736965 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:21:57.740813 kubelet[2105]: I0715 11:21:57.739930 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:21:57.740918 env[1322]: time="2025-07-15T11:21:57.740351237Z" level=info msg="StopPodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\"" Jul 15 11:21:57.740918 env[1322]: time="2025-07-15T11:21:57.740678858Z" level=info msg="StopPodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\"" Jul 15 11:21:57.741264 kubelet[2105]: I0715 11:21:57.741193 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:21:57.742155 env[1322]: time="2025-07-15T11:21:57.742061191Z" level=info msg="StopPodSandbox for 
\"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\"" Jul 15 11:21:57.742654 kubelet[2105]: I0715 11:21:57.742623 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:21:57.743491 env[1322]: time="2025-07-15T11:21:57.743458524Z" level=info msg="StopPodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\"" Jul 15 11:21:57.747861 kubelet[2105]: I0715 11:21:57.747774 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:21:57.760217 env[1322]: time="2025-07-15T11:21:57.760172239Z" level=info msg="StopPodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\"" Jul 15 11:21:57.786415 env[1322]: time="2025-07-15T11:21:57.786344225Z" level=error msg="StopPodSandbox for \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" failed" error="failed to destroy network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.786943 kubelet[2105]: E0715 11:21:57.786894 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:21:57.787046 kubelet[2105]: E0715 11:21:57.786961 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e"} Jul 15 11:21:57.787046 kubelet[2105]: E0715 11:21:57.787018 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a26098c-3a6d-4086-9f52-c996f6c18545\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.787191 kubelet[2105]: E0715 11:21:57.787040 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a26098c-3a6d-4086-9f52-c996f6c18545\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d7f6f864f-8v7kf" podUID="3a26098c-3a6d-4086-9f52-c996f6c18545" Jul 15 11:21:57.791769 env[1322]: time="2025-07-15T11:21:57.791718983Z" level=error msg="StopPodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" failed" error="failed to destroy network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.792225 kubelet[2105]: E0715 11:21:57.792021 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:21:57.792225 kubelet[2105]: E0715 11:21:57.792085 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7"} Jul 15 11:21:57.792225 kubelet[2105]: E0715 11:21:57.792117 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bee128a-ec69-4c1c-9486-dda7cdd5da8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.792225 kubelet[2105]: E0715 11:21:57.792196 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bee128a-ec69-4c1c-9486-dda7cdd5da8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-z6zgm" podUID="2bee128a-ec69-4c1c-9486-dda7cdd5da8f" Jul 15 11:21:57.792975 env[1322]: time="2025-07-15T11:21:57.791740305Z" level=error msg="StopPodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" failed" error="failed to destroy network for sandbox 
\"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.793244 kubelet[2105]: E0715 11:21:57.793212 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:21:57.793322 kubelet[2105]: E0715 11:21:57.793249 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3"} Jul 15 11:21:57.793322 kubelet[2105]: E0715 11:21:57.793275 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56b3e9f3-a41a-497f-bf77-8ab0f2093996\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.793322 kubelet[2105]: E0715 11:21:57.793293 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56b3e9f3-a41a-497f-bf77-8ab0f2093996\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cf58f847-st89j" podUID="56b3e9f3-a41a-497f-bf77-8ab0f2093996" Jul 15 11:21:57.801537 env[1322]: time="2025-07-15T11:21:57.801493075Z" level=error msg="StopPodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" failed" error="failed to destroy network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.801829 kubelet[2105]: E0715 11:21:57.801782 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:21:57.801829 kubelet[2105]: E0715 11:21:57.801826 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e"} Jul 15 11:21:57.801964 kubelet[2105]: E0715 11:21:57.801872 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"321abb3c-e37b-40c2-9f34-4c9e458226fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" Jul 15 11:21:57.801964 kubelet[2105]: E0715 11:21:57.801897 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"321abb3c-e37b-40c2-9f34-4c9e458226fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks" podUID="321abb3c-e37b-40c2-9f34-4c9e458226fa" Jul 15 11:21:57.815239 env[1322]: time="2025-07-15T11:21:57.815191709Z" level=error msg="StopPodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" failed" error="failed to destroy network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.815588 kubelet[2105]: E0715 11:21:57.815438 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:21:57.815588 kubelet[2105]: E0715 11:21:57.815496 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e"} Jul 15 11:21:57.815588 kubelet[2105]: E0715 11:21:57.815527 2105 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.815588 kubelet[2105]: E0715 11:21:57.815548 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5caaf704-0a5d-4b3c-abd2-5b536ffec524\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h54b2" podUID="5caaf704-0a5d-4b3c-abd2-5b536ffec524" Jul 15 11:21:57.825008 env[1322]: time="2025-07-15T11:21:57.824953681Z" level=error msg="StopPodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" failed" error="failed to destroy network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.825330 env[1322]: time="2025-07-15T11:21:57.825109091Z" level=error msg="StopPodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" failed" error="failed to destroy network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.825383 kubelet[2105]: E0715 11:21:57.825182 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:21:57.825383 kubelet[2105]: E0715 11:21:57.825226 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1"} Jul 15 11:21:57.825383 kubelet[2105]: E0715 11:21:57.825278 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.825383 kubelet[2105]: E0715 11:21:57.825242 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:21:57.825529 kubelet[2105]: E0715 
11:21:57.825298 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fnwj2" podUID="bb2b16b0-5f08-47fc-9227-ffb2cce80eb6" Jul 15 11:21:57.825529 kubelet[2105]: E0715 11:21:57.825310 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85"} Jul 15 11:21:57.825529 kubelet[2105]: E0715 11:21:57.825370 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8b304b2-34d6-4422-9aa6-042595bfafa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.825529 kubelet[2105]: E0715 11:21:57.825391 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8b304b2-34d6-4422-9aa6-042595bfafa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-5744454759-sfdbw" podUID="e8b304b2-34d6-4422-9aa6-042595bfafa7" Jul 15 11:21:57.831812 env[1322]: time="2025-07-15T11:21:57.831767855Z" level=error msg="StopPodSandbox for \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" failed" error="failed to destroy network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:21:57.832050 kubelet[2105]: E0715 11:21:57.831984 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:21:57.832103 kubelet[2105]: E0715 11:21:57.832058 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b"} Jul 15 11:21:57.832103 kubelet[2105]: E0715 11:21:57.832083 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"234de6b0-3684-41e7-9d27-ef2f8683df1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:21:57.832180 kubelet[2105]: E0715 11:21:57.832099 2105 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"234de6b0-3684-41e7-9d27-ef2f8683df1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m5sfv" podUID="234de6b0-3684-41e7-9d27-ef2f8683df1a" Jul 15 11:21:57.875428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e-shm.mount: Deactivated successfully. Jul 15 11:21:57.875581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e-shm.mount: Deactivated successfully. Jul 15 11:22:01.419800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537914299.mount: Deactivated successfully. 
Jul 15 11:22:01.695924 env[1322]: time="2025-07-15T11:22:01.695785170Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:01.697341 env[1322]: time="2025-07-15T11:22:01.697305376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:01.698668 env[1322]: time="2025-07-15T11:22:01.698642731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:01.700068 env[1322]: time="2025-07-15T11:22:01.700039050Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:01.700617 env[1322]: time="2025-07-15T11:22:01.700587681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 11:22:01.715805 env[1322]: time="2025-07-15T11:22:01.715771379Z" level=info msg="CreateContainer within sandbox \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 11:22:01.738545 env[1322]: time="2025-07-15T11:22:01.738483342Z" level=info msg="CreateContainer within sandbox \"9b20f49ecbfd1e4c61449d3a2aa88e0bd2c8d657214bb660879e789b5f6f0a8a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8f606f26d70f0337e48a4a0f9bf844dce1a95b1fae0faffb0ad0537d859a3e5b\"" Jul 15 11:22:01.740376 env[1322]: time="2025-07-15T11:22:01.739043254Z" level=info msg="StartContainer for 
\"8f606f26d70f0337e48a4a0f9bf844dce1a95b1fae0faffb0ad0537d859a3e5b\"" Jul 15 11:22:01.843710 env[1322]: time="2025-07-15T11:22:01.843665764Z" level=info msg="StartContainer for \"8f606f26d70f0337e48a4a0f9bf844dce1a95b1fae0faffb0ad0537d859a3e5b\" returns successfully" Jul 15 11:22:02.052970 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 11:22:02.053096 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 11:22:02.161604 env[1322]: time="2025-07-15T11:22:02.161557973Z" level=info msg="StopPodSandbox for \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\"" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.302 [INFO][3412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.302 [INFO][3412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" iface="eth0" netns="/var/run/netns/cni-f9035ca0-e021-654d-3ecb-9b71ee65a0e3" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.303 [INFO][3412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" iface="eth0" netns="/var/run/netns/cni-f9035ca0-e021-654d-3ecb-9b71ee65a0e3" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.306 [INFO][3412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" iface="eth0" netns="/var/run/netns/cni-f9035ca0-e021-654d-3ecb-9b71ee65a0e3" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.306 [INFO][3412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.306 [INFO][3412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.398 [INFO][3421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.398 [INFO][3421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.398 [INFO][3421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.407 [WARNING][3421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.407 [INFO][3421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.408 [INFO][3421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:02.412102 env[1322]: 2025-07-15 11:22:02.410 [INFO][3412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:02.412510 env[1322]: time="2025-07-15T11:22:02.412183307Z" level=info msg="TearDown network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" successfully" Jul 15 11:22:02.412510 env[1322]: time="2025-07-15T11:22:02.412218029Z" level=info msg="StopPodSandbox for \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" returns successfully" Jul 15 11:22:02.419205 systemd[1]: run-netns-cni\x2df9035ca0\x2de021\x2d654d\x2d3ecb\x2d9b71ee65a0e3.mount: Deactivated successfully. 
Jul 15 11:22:02.464355 kubelet[2105]: I0715 11:22:02.464314 2105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-ca-bundle\") pod \"3a26098c-3a6d-4086-9f52-c996f6c18545\" (UID: \"3a26098c-3a6d-4086-9f52-c996f6c18545\") " Jul 15 11:22:02.464657 kubelet[2105]: I0715 11:22:02.464359 2105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2v47\" (UniqueName: \"kubernetes.io/projected/3a26098c-3a6d-4086-9f52-c996f6c18545-kube-api-access-d2v47\") pod \"3a26098c-3a6d-4086-9f52-c996f6c18545\" (UID: \"3a26098c-3a6d-4086-9f52-c996f6c18545\") " Jul 15 11:22:02.464657 kubelet[2105]: I0715 11:22:02.464447 2105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-backend-key-pair\") pod \"3a26098c-3a6d-4086-9f52-c996f6c18545\" (UID: \"3a26098c-3a6d-4086-9f52-c996f6c18545\") " Jul 15 11:22:02.469831 kubelet[2105]: I0715 11:22:02.469799 2105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a26098c-3a6d-4086-9f52-c996f6c18545-kube-api-access-d2v47" (OuterVolumeSpecName: "kube-api-access-d2v47") pod "3a26098c-3a6d-4086-9f52-c996f6c18545" (UID: "3a26098c-3a6d-4086-9f52-c996f6c18545"). InnerVolumeSpecName "kube-api-access-d2v47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:22:02.470315 kubelet[2105]: I0715 11:22:02.470288 2105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3a26098c-3a6d-4086-9f52-c996f6c18545" (UID: "3a26098c-3a6d-4086-9f52-c996f6c18545"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:22:02.470459 systemd[1]: var-lib-kubelet-pods-3a26098c\x2d3a6d\x2d4086\x2d9f52\x2dc996f6c18545-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd2v47.mount: Deactivated successfully. Jul 15 11:22:02.472466 kubelet[2105]: I0715 11:22:02.472435 2105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3a26098c-3a6d-4086-9f52-c996f6c18545" (UID: "3a26098c-3a6d-4086-9f52-c996f6c18545"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:22:02.473962 systemd[1]: var-lib-kubelet-pods-3a26098c\x2d3a6d\x2d4086\x2d9f52\x2dc996f6c18545-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 11:22:02.564992 kubelet[2105]: I0715 11:22:02.564959 2105 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 11:22:02.565120 kubelet[2105]: I0715 11:22:02.565107 2105 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2v47\" (UniqueName: \"kubernetes.io/projected/3a26098c-3a6d-4086-9f52-c996f6c18545-kube-api-access-d2v47\") on node \"localhost\" DevicePath \"\"" Jul 15 11:22:02.565183 kubelet[2105]: I0715 11:22:02.565173 2105 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a26098c-3a6d-4086-9f52-c996f6c18545-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 11:22:02.788700 kubelet[2105]: I0715 11:22:02.788640 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-982vn" podStartSLOduration=1.8089825099999999 
podStartE2EDuration="12.788622275s" podCreationTimestamp="2025-07-15 11:21:50 +0000 UTC" firstStartedPulling="2025-07-15 11:21:50.721747321 +0000 UTC m=+18.167252456" lastFinishedPulling="2025-07-15 11:22:01.701387046 +0000 UTC m=+29.146892221" observedRunningTime="2025-07-15 11:22:02.778598371 +0000 UTC m=+30.224103546" watchObservedRunningTime="2025-07-15 11:22:02.788622275 +0000 UTC m=+30.234127450" Jul 15 11:22:02.866981 kubelet[2105]: I0715 11:22:02.866927 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b541ad05-5f2f-43ad-bfa9-de3aed2521cb-whisker-ca-bundle\") pod \"whisker-575985fc67-29d9v\" (UID: \"b541ad05-5f2f-43ad-bfa9-de3aed2521cb\") " pod="calico-system/whisker-575985fc67-29d9v" Jul 15 11:22:02.867136 kubelet[2105]: I0715 11:22:02.867058 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b541ad05-5f2f-43ad-bfa9-de3aed2521cb-whisker-backend-key-pair\") pod \"whisker-575985fc67-29d9v\" (UID: \"b541ad05-5f2f-43ad-bfa9-de3aed2521cb\") " pod="calico-system/whisker-575985fc67-29d9v" Jul 15 11:22:02.867136 kubelet[2105]: I0715 11:22:02.867118 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp6h\" (UniqueName: \"kubernetes.io/projected/b541ad05-5f2f-43ad-bfa9-de3aed2521cb-kube-api-access-7bp6h\") pod \"whisker-575985fc67-29d9v\" (UID: \"b541ad05-5f2f-43ad-bfa9-de3aed2521cb\") " pod="calico-system/whisker-575985fc67-29d9v" Jul 15 11:22:03.118359 env[1322]: time="2025-07-15T11:22:03.118250864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-575985fc67-29d9v,Uid:b541ad05-5f2f-43ad-bfa9-de3aed2521cb,Namespace:calico-system,Attempt:0,}" Jul 15 11:22:03.249246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:22:03.249367 kernel: 
IPv6: ADDRCONF(NETDEV_CHANGE): calie676d56c916: link becomes ready Jul 15 11:22:03.249779 systemd-networkd[1098]: calie676d56c916: Link UP Jul 15 11:22:03.250644 systemd-networkd[1098]: calie676d56c916: Gained carrier Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.145 [INFO][3444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.165 [INFO][3444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--575985fc67--29d9v-eth0 whisker-575985fc67- calico-system b541ad05-5f2f-43ad-bfa9-de3aed2521cb 872 0 2025-07-15 11:22:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:575985fc67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-575985fc67-29d9v eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie676d56c916 [] [] }} ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.165 [INFO][3444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.191 [INFO][3457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" HandleID="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Workload="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.191 [INFO][3457] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" HandleID="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Workload="localhost-k8s-whisker--575985fc67--29d9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-575985fc67-29d9v", "timestamp":"2025-07-15 11:22:03.19141453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.192 [INFO][3457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.192 [INFO][3457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.192 [INFO][3457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.206 [INFO][3457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.214 [INFO][3457] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.218 [INFO][3457] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.219 [INFO][3457] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.221 [INFO][3457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 
15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.221 [INFO][3457] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.223 [INFO][3457] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.226 [INFO][3457] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.235 [INFO][3457] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.235 [INFO][3457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" host="localhost" Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.235 [INFO][3457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:03.267614 env[1322]: 2025-07-15 11:22:03.235 [INFO][3457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" HandleID="k8s-pod-network.1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Workload="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.268241 env[1322]: 2025-07-15 11:22:03.237 [INFO][3444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--575985fc67--29d9v-eth0", GenerateName:"whisker-575985fc67-", Namespace:"calico-system", SelfLink:"", UID:"b541ad05-5f2f-43ad-bfa9-de3aed2521cb", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"575985fc67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-575985fc67-29d9v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie676d56c916", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:03.268241 env[1322]: 2025-07-15 11:22:03.237 [INFO][3444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.268241 env[1322]: 2025-07-15 11:22:03.237 [INFO][3444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie676d56c916 ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.268241 env[1322]: 2025-07-15 11:22:03.249 [INFO][3444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.268241 env[1322]: 2025-07-15 11:22:03.251 [INFO][3444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--575985fc67--29d9v-eth0", GenerateName:"whisker-575985fc67-", Namespace:"calico-system", SelfLink:"", UID:"b541ad05-5f2f-43ad-bfa9-de3aed2521cb", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"575985fc67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa", Pod:"whisker-575985fc67-29d9v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie676d56c916", MAC:"d2:1d:7e:5d:af:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:03.268241 env[1322]: 2025-07-15 11:22:03.265 [INFO][3444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa" Namespace="calico-system" Pod="whisker-575985fc67-29d9v" WorkloadEndpoint="localhost-k8s-whisker--575985fc67--29d9v-eth0" Jul 15 11:22:03.276742 env[1322]: time="2025-07-15T11:22:03.276670587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:03.276742 env[1322]: time="2025-07-15T11:22:03.276718150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:03.276742 env[1322]: time="2025-07-15T11:22:03.276728750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:03.276988 env[1322]: time="2025-07-15T11:22:03.276952762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa pid=3482 runtime=io.containerd.runc.v2 Jul 15 11:22:03.304511 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:03.322110 env[1322]: time="2025-07-15T11:22:03.322062921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-575985fc67-29d9v,Uid:b541ad05-5f2f-43ad-bfa9-de3aed2521cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa\"" Jul 15 11:22:03.324059 env[1322]: time="2025-07-15T11:22:03.324029824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 11:22:03.396000 audit[3557]: AVC avc: denied { write } for pid=3557 comm="tee" name="fd" dev="proc" ino=20603 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.400914 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 15 11:22:03.400997 kernel: audit: type=1400 audit(1752578523.396:293): avc: denied { write } for pid=3557 comm="tee" name="fd" dev="proc" ino=20603 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.401038 kernel: audit: type=1300 audit(1752578523.396:293): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd1f1f7e5 a2=241 a3=1b6 items=1 ppid=3525 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.396000 audit[3557]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c 
a1=ffffd1f1f7e5 a2=241 a3=1b6 items=1 ppid=3525 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.396000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 15 11:22:03.404294 kernel: audit: type=1307 audit(1752578523.396:293): cwd="/etc/service/enabled/confd/log" Jul 15 11:22:03.404358 kernel: audit: type=1302 audit(1752578523.396:293): item=0 name="/dev/fd/63" inode=19678 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.396000 audit: PATH item=0 name="/dev/fd/63" inode=19678 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.411144 kernel: audit: type=1327 audit(1752578523.396:293): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.402000 audit[3561]: AVC avc: denied { write } for pid=3561 comm="tee" name="fd" dev="proc" ino=20611 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.416213 kernel: audit: type=1400 audit(1752578523.402:294): avc: denied { write } for pid=3561 comm="tee" name="fd" dev="proc" ino=20611 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.402000 audit[3561]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc9cc97d6 a2=241 a3=1b6 items=1 ppid=3526 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.428350 kernel: audit: type=1300 audit(1752578523.402:294): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc9cc97d6 a2=241 a3=1b6 items=1 ppid=3526 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.402000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 15 11:22:03.435772 kernel: audit: type=1307 audit(1752578523.402:294): cwd="/etc/service/enabled/node-status-reporter/log" Jul 15 11:22:03.402000 audit: PATH item=0 name="/dev/fd/63" inode=19679 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.439880 kernel: audit: type=1302 audit(1752578523.402:294): item=0 name="/dev/fd/63" inode=19679 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.402000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.443201 kernel: audit: type=1327 audit(1752578523.402:294): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.406000 audit[3563]: AVC avc: denied { write } for pid=3563 comm="tee" name="fd" dev="proc" ino=19027 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.406000 audit[3563]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffec677d5 a2=241 a3=1b6 
items=1 ppid=3532 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.406000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 15 11:22:03.406000 audit: PATH item=0 name="/dev/fd/63" inode=19024 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.441000 audit[3589]: AVC avc: denied { write } for pid=3589 comm="tee" name="fd" dev="proc" ino=20631 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.441000 audit[3589]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcfb227e6 a2=241 a3=1b6 items=1 ppid=3523 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.441000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 15 11:22:03.441000 audit: PATH item=0 name="/dev/fd/63" inode=18258 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.455000 audit[3601]: AVC avc: denied { write } for pid=3601 comm="tee" name="fd" dev="proc" ino=19039 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.455000 
audit[3601]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff8eab7e5 a2=241 a3=1b6 items=1 ppid=3533 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.455000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 15 11:22:03.455000 audit: PATH item=0 name="/dev/fd/63" inode=20635 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.455000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.463000 audit[3597]: AVC avc: denied { write } for pid=3597 comm="tee" name="fd" dev="proc" ino=19701 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.463000 audit[3597]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffeca197e7 a2=241 a3=1b6 items=1 ppid=3540 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.463000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 15 11:22:03.463000 audit: PATH item=0 name="/dev/fd/63" inode=18259 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.480221 kubelet[2105]: I0715 11:22:03.480159 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 
11:22:03.480610 kubelet[2105]: E0715 11:22:03.480556 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:03.481000 audit[3604]: AVC avc: denied { write } for pid=3604 comm="tee" name="fd" dev="proc" ino=19047 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:22:03.481000 audit[3604]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc35a17e5 a2=241 a3=1b6 items=1 ppid=3531 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.481000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 15 11:22:03.481000 audit: PATH item=0 name="/dev/fd/63" inode=18260 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:22:03.481000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:22:03.575000 audit[3617]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:03.575000 audit[3617]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffed1c55b0 a2=0 a3=1 items=0 ppid=2215 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:03.582000 audit[3617]: 
NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:03.582000 audit[3617]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffed1c55b0 a2=0 a3=1 items=0 ppid=2215 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { perfmon } for 
pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.629000 audit: BPF prog-id=10 op=LOAD Jul 15 11:22:03.629000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebe0abf8 a2=98 a3=ffffebe0abe8 items=0 ppid=3536 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.629000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:22:03.630000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: 
AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit: BPF prog-id=11 op=LOAD Jul 15 11:22:03.630000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebe0aaa8 a2=74 a3=95 items=0 ppid=3536 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.630000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:22:03.630000 audit: BPF prog-id=11 op=UNLOAD Jul 
15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { bpf } for pid=3627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit: BPF prog-id=12 op=LOAD Jul 15 11:22:03.630000 
audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffebe0aad8 a2=40 a3=ffffebe0ab08 items=0 ppid=3536 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.630000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:22:03.630000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:22:03.630000 audit[3627]: AVC avc: denied { perfmon } for pid=3627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.630000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffebe0abf0 a2=50 a3=0 items=0 ppid=3536 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.630000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: 
denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit: BPF prog-id=13 op=LOAD Jul 15 11:22:03.632000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffcb62588 a2=98 a3=fffffcb62578 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.632000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.632000 audit: BPF prog-id=13 
op=UNLOAD Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit: BPF prog-id=14 op=LOAD Jul 15 
11:22:03.632000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffcb62218 a2=74 a3=95 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.632000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.632000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.632000 audit: BPF prog-id=15 op=LOAD Jul 15 11:22:03.632000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffcb62278 a2=94 a3=2 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.632000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.632000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.727000 audit: BPF prog-id=16 op=LOAD Jul 15 11:22:03.727000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffcb62238 a2=40 a3=fffffcb62268 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.727000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.728000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:22:03.728000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.728000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffffcb62350 a2=50 a3=0 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 
11:22:03.728000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.737000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.737000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffcb622a8 a2=28 a3=fffffcb623d8 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.737000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.737000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffcb622d8 a2=28 a3=fffffcb62408 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.737000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.737000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffcb62188 a2=28 a3=fffffcb622b8 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.737000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.737000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.737000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffcb622f8 a2=28 a3=fffffcb62428 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.738000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.738000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffcb622d8 a2=28 a3=fffffcb62408 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.738000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.738000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffcb622c8 a2=28 a3=fffffcb623f8 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.738000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.738000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.738000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffcb622f8 a2=28 a3=fffffcb62428 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.738000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.738000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffcb622d8 a2=28 a3=fffffcb62408 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.738000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.738000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffcb622f8 a2=28 a3=fffffcb62428 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.738000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.738000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.738000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffcb622c8 a2=28 a3=fffffcb623f8 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffcb62348 a2=28 a3=fffffcb62488 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffcb62080 a2=50 a3=0 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.739000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.739000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:22:03.739000 audit: BPF prog-id=17 op=LOAD Jul 15 11:22:03.739000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffcb62088 a2=94 a3=5 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.740000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffcb62190 a2=50 a3=0 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.740000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffffcb622d8 a2=4 a3=3 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.740000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.740000 audit[3628]: 
AVC avc: denied { confidentiality } for pid=3628 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:22:03.740000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffcb622b8 a2=94 a3=6 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.740000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { perfmon } for 
pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.741000 audit[3628]: AVC avc: denied { confidentiality } for pid=3628 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:22:03.741000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffcb61a88 a2=94 a3=83 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.741000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { perfmon } for pid=3628 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { bpf } for pid=3628 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.742000 audit[3628]: AVC avc: denied { confidentiality } for pid=3628 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:22:03.742000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffcb61a88 a2=94 
a3=83 items=0 ppid=3536 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.752000 audit: BPF prog-id=18 op=LOAD Jul 15 11:22:03.752000 audit[3650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3c09ee8 a2=98 a3=fffff3c09ed8 items=0 ppid=3536 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.752000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:22:03.753000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit: BPF prog-id=19 op=LOAD Jul 15 11:22:03.753000 audit[3650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3c09d98 a2=74 a3=95 items=0 ppid=3536 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:22:03.753000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.753000 audit: BPF prog-id=20 op=LOAD Jul 15 11:22:03.753000 audit[3650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3c09dc8 a2=40 a3=fffff3c09df8 items=0 ppid=3536 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:22:03.754000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:22:03.765464 kubelet[2105]: E0715 11:22:03.765423 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:03.830318 systemd-networkd[1098]: vxlan.calico: Link UP Jul 15 11:22:03.830323 systemd-networkd[1098]: vxlan.calico: Gained carrier Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.845000 audit: BPF prog-id=21 op=LOAD Jul 15 11:22:03.845000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff33396d8 a2=98 a3=fffff33396c8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.845000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.846000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.846000 audit: BPF prog-id=22 op=LOAD Jul 15 11:22:03.846000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff33393b8 a2=74 a3=95 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.846000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit: BPF prog-id=23 op=LOAD Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3339418 a2=94 a3=2 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff3339448 a2=28 a3=fffff3339578 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 
success=no exit=-22 a0=12 a1=fffff3339478 a2=28 a3=fffff33395a8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3339328 a2=28 a3=fffff3339458 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff3339498 a2=28 a3=fffff33395c8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff3339478 a2=28 a3=fffff33395a8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff3339468 a2=28 a3=fffff3339598 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff3339498 a2=28 a3=fffff33395c8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3339478 a2=28 a3=fffff33395a8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3339498 a2=28 a3=fffff33395c8 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff3339468 a2=28 a3=fffff3339598 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff33394e8 a2=28 a3=fffff3339628 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for 
pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit: BPF prog-id=24 op=LOAD Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=6 a0=5 a1=fffff3339308 a2=40 a3=fffff3339338 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffff3339330 a2=50 a3=0 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffff3339330 a2=50 a3=0 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for 
pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit: BPF prog-id=25 op=LOAD Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3338a98 a2=94 a3=2 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.847000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.847000 audit: BPF prog-id=26 op=LOAD Jul 15 11:22:03.847000 audit[3694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3338c28 a2=94 a3=30 items=0 ppid=3536 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.847000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.856000 audit: BPF prog-id=27 op=LOAD Jul 15 11:22:03.856000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9ea0578 a2=98 a3=fffff9ea0568 items=0 ppid=3536 pid=3704 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.856000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.856000 audit: BPF prog-id=27 op=UNLOAD Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 
audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit: BPF prog-id=28 op=LOAD Jul 15 11:22:03.857000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9ea0208 a2=74 a3=95 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.857000 audit: BPF prog-id=28 op=UNLOAD Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for 
pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.857000 audit: BPF prog-id=29 op=LOAD Jul 15 11:22:03.857000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9ea0268 a2=94 a3=2 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.857000 audit: BPF prog-id=29 op=UNLOAD Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.946000 audit: BPF prog-id=30 op=LOAD Jul 15 11:22:03.946000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff9ea0228 a2=40 a3=fffff9ea0258 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 15 11:22:03.946000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.947000 audit: BPF prog-id=30 op=UNLOAD Jul 15 11:22:03.947000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.947000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff9ea0340 a2=50 a3=0 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.947000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9ea0298 a2=28 a3=fffff9ea03c8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff9ea02c8 a2=28 a3=fffff9ea03f8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff9ea0178 a2=28 a3=fffff9ea02a8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9ea02e8 a2=28 a3=fffff9ea0418 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 
11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9ea02c8 a2=28 a3=fffff9ea03f8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9ea02b8 a2=28 a3=fffff9ea03e8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9ea02e8 a2=28 a3=fffff9ea0418 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff9ea02c8 a2=28 a3=fffff9ea03f8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff9ea02e8 a2=28 a3=fffff9ea0418 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff9ea02b8 a2=28 a3=fffff9ea03e8 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.955000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.955000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff9ea0338 a2=28 a3=fffff9ea0478 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 
audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff9ea0070 a2=50 a3=0 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit: BPF prog-id=31 op=LOAD Jul 15 11:22:03.956000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff9ea0078 a2=94 a3=5 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit: BPF prog-id=31 op=UNLOAD Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff9ea0180 a2=50 a3=0 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } 
for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff9ea02c8 a2=4 a3=3 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 
15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { confidentiality } for pid=3704 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:22:03.956000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff9ea02a8 a2=94 a3=6 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { confidentiality } for pid=3704 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:22:03.956000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff9e9fa78 a2=94 a3=83 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: 
denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { perfmon } for pid=3704 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.956000 audit[3704]: AVC avc: denied { confidentiality } for pid=3704 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:22:03.956000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff9e9fa78 a2=94 a3=83 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.957000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.957000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff9ea14b8 a2=10 a3=fffff9ea15a8 items=0 ppid=3536 
pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.957000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.957000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.957000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff9ea1378 a2=10 a3=fffff9ea1468 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.957000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.957000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.957000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff9ea12e8 a2=10 a3=fffff9ea1468 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.957000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 
11:22:03.957000 audit[3704]: AVC avc: denied { bpf } for pid=3704 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:22:03.957000 audit[3704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff9ea12e8 a2=10 a3=fffff9ea1468 items=0 ppid=3536 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:03.957000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:22:03.967000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:22:04.014000 audit[3727]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3727 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:04.014000 audit[3727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe86f49f0 a2=0 a3=ffff9c98ffa8 items=0 ppid=3536 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:04.014000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:04.027000 audit[3730]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3730 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:04.027000 audit[3730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe7cfdbd0 a2=0 a3=ffff9e37efa8 items=0 ppid=3536 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:04.027000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:04.029000 audit[3726]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3726 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:04.029000 audit[3726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd4197fc0 a2=0 a3=ffffae109fa8 items=0 ppid=3536 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:04.029000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:04.032000 audit[3729]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3729 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:04.032000 audit[3729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffce1f58d0 a2=0 a3=ffff8c294fa8 items=0 ppid=3536 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:04.032000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:04.341137 env[1322]: time="2025-07-15T11:22:04.341089994Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:22:04.342359 env[1322]: time="2025-07-15T11:22:04.342316775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:04.343796 env[1322]: time="2025-07-15T11:22:04.343755128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:04.345011 env[1322]: time="2025-07-15T11:22:04.344977589Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:04.345608 env[1322]: time="2025-07-15T11:22:04.345571059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 11:22:04.347755 env[1322]: time="2025-07-15T11:22:04.347719408Z" level=info msg="CreateContainer within sandbox \"1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 11:22:04.361078 env[1322]: time="2025-07-15T11:22:04.361014757Z" level=info msg="CreateContainer within sandbox \"1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9dda31c01c730205ebbe3ee57458b3442463bb5657e9ac40ec4d3c94d7a153da\"" Jul 15 11:22:04.362681 env[1322]: time="2025-07-15T11:22:04.362653000Z" level=info msg="StartContainer for \"9dda31c01c730205ebbe3ee57458b3442463bb5657e9ac40ec4d3c94d7a153da\"" Jul 15 11:22:04.440235 env[1322]: time="2025-07-15T11:22:04.440192146Z" level=info msg="StartContainer for 
\"9dda31c01c730205ebbe3ee57458b3442463bb5657e9ac40ec4d3c94d7a153da\" returns successfully" Jul 15 11:22:04.441479 env[1322]: time="2025-07-15T11:22:04.441446090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 11:22:04.651969 kubelet[2105]: I0715 11:22:04.651729 2105 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a26098c-3a6d-4086-9f52-c996f6c18545" path="/var/lib/kubelet/pods/3a26098c-3a6d-4086-9f52-c996f6c18545/volumes" Jul 15 11:22:04.782802 systemd[1]: run-containerd-runc-k8s.io-8f606f26d70f0337e48a4a0f9bf844dce1a95b1fae0faffb0ad0537d859a3e5b-runc.8w5CIc.mount: Deactivated successfully. Jul 15 11:22:04.925026 systemd-networkd[1098]: calie676d56c916: Gained IPv6LL Jul 15 11:22:05.629497 systemd-networkd[1098]: vxlan.calico: Gained IPv6LL Jul 15 11:22:05.925775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523016839.mount: Deactivated successfully. Jul 15 11:22:05.941301 env[1322]: time="2025-07-15T11:22:05.941255971Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:05.943227 env[1322]: time="2025-07-15T11:22:05.943191865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:05.945317 env[1322]: time="2025-07-15T11:22:05.945279767Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:05.947342 env[1322]: time="2025-07-15T11:22:05.947305225Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:05.947992 env[1322]: time="2025-07-15T11:22:05.947957097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 11:22:05.950298 env[1322]: time="2025-07-15T11:22:05.950267009Z" level=info msg="CreateContainer within sandbox \"1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 11:22:05.961586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394604007.mount: Deactivated successfully. Jul 15 11:22:05.963793 env[1322]: time="2025-07-15T11:22:05.963742664Z" level=info msg="CreateContainer within sandbox \"1595f9aee803123d5d0d7c0dd41f64374a7a9d26616032ffc65655ffa45370fa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"21e9c946e02cabf79ab8fcd21d777f51cac9d4caf6c7f47974129c5c218729c7\"" Jul 15 11:22:05.965243 env[1322]: time="2025-07-15T11:22:05.964944482Z" level=info msg="StartContainer for \"21e9c946e02cabf79ab8fcd21d777f51cac9d4caf6c7f47974129c5c218729c7\"" Jul 15 11:22:06.040257 env[1322]: time="2025-07-15T11:22:06.040203355Z" level=info msg="StartContainer for \"21e9c946e02cabf79ab8fcd21d777f51cac9d4caf6c7f47974129c5c218729c7\" returns successfully" Jul 15 11:22:06.783704 kubelet[2105]: I0715 11:22:06.783627 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-575985fc67-29d9v" podStartSLOduration=2.158339133 podStartE2EDuration="4.783613033s" podCreationTimestamp="2025-07-15 11:22:02 +0000 UTC" firstStartedPulling="2025-07-15 11:22:03.323756049 +0000 UTC m=+30.769261224" lastFinishedPulling="2025-07-15 11:22:05.949029949 +0000 UTC 
m=+33.394535124" observedRunningTime="2025-07-15 11:22:06.783343781 +0000 UTC m=+34.228848956" watchObservedRunningTime="2025-07-15 11:22:06.783613033 +0000 UTC m=+34.229118208" Jul 15 11:22:06.797000 audit[3849]: NETFILTER_CFG table=filter:105 family=2 entries=19 op=nft_register_rule pid=3849 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:06.797000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe39c80b0 a2=0 a3=1 items=0 ppid=2215 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:06.797000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:06.808000 audit[3849]: NETFILTER_CFG table=nat:106 family=2 entries=21 op=nft_register_chain pid=3849 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:06.808000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffe39c80b0 a2=0 a3=1 items=0 ppid=2215 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:06.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:06.932503 systemd[1]: run-containerd-runc-k8s.io-21e9c946e02cabf79ab8fcd21d777f51cac9d4caf6c7f47974129c5c218729c7-runc.9WDefZ.mount: Deactivated successfully. 
Jul 15 11:22:08.649761 env[1322]: time="2025-07-15T11:22:08.649718401Z" level=info msg="StopPodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\"" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.698 [INFO][3864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.698 [INFO][3864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" iface="eth0" netns="/var/run/netns/cni-7ceb0654-aad7-ecb9-e7db-827f6f806a05" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.698 [INFO][3864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" iface="eth0" netns="/var/run/netns/cni-7ceb0654-aad7-ecb9-e7db-827f6f806a05" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.698 [INFO][3864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" iface="eth0" netns="/var/run/netns/cni-7ceb0654-aad7-ecb9-e7db-827f6f806a05" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.698 [INFO][3864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.698 [INFO][3864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.720 [INFO][3873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.720 [INFO][3873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.720 [INFO][3873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.730 [WARNING][3873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.730 [INFO][3873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.731 [INFO][3873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:08.737437 env[1322]: 2025-07-15 11:22:08.735 [INFO][3864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:08.737970 env[1322]: time="2025-07-15T11:22:08.737614818Z" level=info msg="TearDown network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" successfully" Jul 15 11:22:08.737970 env[1322]: time="2025-07-15T11:22:08.737646819Z" level=info msg="StopPodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" returns successfully" Jul 15 11:22:08.740001 systemd[1]: run-netns-cni\x2d7ceb0654\x2daad7\x2decb9\x2de7db\x2d827f6f806a05.mount: Deactivated successfully. Jul 15 11:22:08.743377 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:34006.service. Jul 15 11:22:08.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:08.744226 kernel: kauditd_printk_skb: 559 callbacks suppressed Jul 15 11:22:08.744302 kernel: audit: type=1130 audit(1752578528.742:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:08.745548 env[1322]: time="2025-07-15T11:22:08.745423240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5744454759-sfdbw,Uid:e8b304b2-34d6-4422-9aa6-042595bfafa7,Namespace:calico-system,Attempt:1,}" Jul 15 11:22:08.787000 audit[3880]: USER_ACCT pid=3880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.788355 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 34006 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:08.789828 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:08.788000 audit[3880]: CRED_ACQ pid=3880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.793720 kernel: audit: type=1101 audit(1752578528.787:407): pid=3880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.793767 kernel: audit: type=1103 audit(1752578528.788:408): pid=3880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.793785 kernel: audit: type=1006 audit(1752578528.788:409): pid=3880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 15 11:22:08.788000 audit[3880]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9b6efc0 a2=3 a3=1 items=0 ppid=1 pid=3880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:08.799273 kernel: audit: type=1300 audit(1752578528.788:409): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9b6efc0 a2=3 a3=1 items=0 ppid=1 pid=3880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:08.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:08.801383 kernel: audit: type=1327 audit(1752578528.788:409): proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:08.803482 systemd-logind[1305]: New session 8 of user core. Jul 15 11:22:08.804610 systemd[1]: Started session-8.scope. 
Jul 15 11:22:08.808000 audit[3880]: USER_START pid=3880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.808000 audit[3897]: CRED_ACQ pid=3897 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.815201 kernel: audit: type=1105 audit(1752578528.808:410): pid=3880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.815277 kernel: audit: type=1103 audit(1752578528.808:411): pid=3897 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:08.917152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:22:08.917285 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia32f32b4664: link becomes ready Jul 15 11:22:08.917718 systemd-networkd[1098]: calia32f32b4664: Link UP Jul 15 11:22:08.917882 systemd-networkd[1098]: calia32f32b4664: Gained carrier Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.810 [INFO][3882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0 calico-kube-controllers-5744454759- calico-system e8b304b2-34d6-4422-9aa6-042595bfafa7 945 0 2025-07-15 11:21:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:5744454759 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5744454759-sfdbw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia32f32b4664 [] [] }} ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.810 [INFO][3882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.846 [INFO][3899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" HandleID="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.847 [INFO][3899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" HandleID="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5744454759-sfdbw", "timestamp":"2025-07-15 11:22:08.846867131 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.847 [INFO][3899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.847 [INFO][3899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.847 [INFO][3899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.857 [INFO][3899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.861 [INFO][3899] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.865 [INFO][3899] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.867 [INFO][3899] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.869 [INFO][3899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.869 [INFO][3899] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.871 [INFO][3899] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397 Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.902 
[INFO][3899] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.911 [INFO][3899] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.911 [INFO][3899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" host="localhost" Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.911 [INFO][3899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:08.933900 env[1322]: 2025-07-15 11:22:08.911 [INFO][3899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" HandleID="k8s-pod-network.ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.934531 env[1322]: 2025-07-15 11:22:08.914 [INFO][3882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0", GenerateName:"calico-kube-controllers-5744454759-", Namespace:"calico-system", SelfLink:"", UID:"e8b304b2-34d6-4422-9aa6-042595bfafa7", ResourceVersion:"945", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5744454759", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5744454759-sfdbw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32f32b4664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:08.934531 env[1322]: 2025-07-15 11:22:08.914 [INFO][3882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.934531 env[1322]: 2025-07-15 11:22:08.914 [INFO][3882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia32f32b4664 ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.934531 env[1322]: 2025-07-15 11:22:08.917 [INFO][3882] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.934531 env[1322]: 2025-07-15 11:22:08.923 [INFO][3882] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0", GenerateName:"calico-kube-controllers-5744454759-", Namespace:"calico-system", SelfLink:"", UID:"e8b304b2-34d6-4422-9aa6-042595bfafa7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5744454759", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397", Pod:"calico-kube-controllers-5744454759-sfdbw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32f32b4664", MAC:"2a:17:97:f8:78:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:08.934531 env[1322]: 2025-07-15 11:22:08.931 [INFO][3882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397" Namespace="calico-system" Pod="calico-kube-controllers-5744454759-sfdbw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:08.941000 audit[3932]: NETFILTER_CFG table=filter:107 family=2 entries=36 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:08.941000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffff91ae80 a2=0 a3=ffffa0345fa8 items=0 ppid=3536 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:08.945822 env[1322]: time="2025-07-15T11:22:08.945757069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:08.945822 env[1322]: time="2025-07-15T11:22:08.945798191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:08.945949 env[1322]: time="2025-07-15T11:22:08.945814952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:08.947352 kernel: audit: type=1325 audit(1752578528.941:412): table=filter:107 family=2 entries=36 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:08.947416 kernel: audit: type=1300 audit(1752578528.941:412): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffff91ae80 a2=0 a3=ffffa0345fa8 items=0 ppid=3536 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:08.941000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:08.953574 env[1322]: time="2025-07-15T11:22:08.953529650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397 pid=3936 runtime=io.containerd.runc.v2 Jul 15 11:22:08.993791 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:09.015526 env[1322]: time="2025-07-15T11:22:09.015402545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5744454759-sfdbw,Uid:e8b304b2-34d6-4422-9aa6-042595bfafa7,Namespace:calico-system,Attempt:1,} returns sandbox id \"ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397\"" Jul 15 11:22:09.017286 env[1322]: time="2025-07-15T11:22:09.017244423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 11:22:09.057387 sshd[3880]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:09.057000 audit[3880]: USER_END pid=3880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:09.057000 audit[3880]: CRED_DISP pid=3880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:09.059938 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:34006.service: Deactivated successfully. Jul 15 11:22:09.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:09.061112 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 11:22:09.061138 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit. Jul 15 11:22:09.062041 systemd-logind[1305]: Removed session 8. 
Jul 15 11:22:09.649346 env[1322]: time="2025-07-15T11:22:09.649297681Z" level=info msg="StopPodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\"" Jul 15 11:22:09.649551 env[1322]: time="2025-07-15T11:22:09.649512090Z" level=info msg="StopPodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\"" Jul 15 11:22:09.649614 env[1322]: time="2025-07-15T11:22:09.649415766Z" level=info msg="StopPodSandbox for \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\"" Jul 15 11:22:09.649643 env[1322]: time="2025-07-15T11:22:09.649445727Z" level=info msg="StopPodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\"" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.711 [INFO][4028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.712 [INFO][4028] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" iface="eth0" netns="/var/run/netns/cni-01438cc8-7b1b-840a-a773-6e2c22c8f1aa" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.712 [INFO][4028] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" iface="eth0" netns="/var/run/netns/cni-01438cc8-7b1b-840a-a773-6e2c22c8f1aa" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.712 [INFO][4028] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" iface="eth0" netns="/var/run/netns/cni-01438cc8-7b1b-840a-a773-6e2c22c8f1aa" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.712 [INFO][4028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.712 [INFO][4028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.746 [INFO][4057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.746 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.747 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.755 [WARNING][4057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.756 [INFO][4057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.757 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:09.764855 env[1322]: 2025-07-15 11:22:09.761 [INFO][4028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:09.765824 env[1322]: time="2025-07-15T11:22:09.764979156Z" level=info msg="TearDown network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" successfully" Jul 15 11:22:09.765824 env[1322]: time="2025-07-15T11:22:09.765011158Z" level=info msg="StopPodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" returns successfully" Jul 15 11:22:09.766230 env[1322]: time="2025-07-15T11:22:09.766197728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-st89j,Uid:56b3e9f3-a41a-497f-bf77-8ab0f2093996,Namespace:calico-apiserver,Attempt:1,}" Jul 15 11:22:09.767328 systemd[1]: run-netns-cni\x2d01438cc8\x2d7b1b\x2d840a\x2da773\x2d6e2c22c8f1aa.mount: Deactivated successfully. 
Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.717 [INFO][4026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.717 [INFO][4026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" iface="eth0" netns="/var/run/netns/cni-8d45ef00-16fa-4e7c-edfb-3abecd6c1ee6" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.718 [INFO][4026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" iface="eth0" netns="/var/run/netns/cni-8d45ef00-16fa-4e7c-edfb-3abecd6c1ee6" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.718 [INFO][4026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" iface="eth0" netns="/var/run/netns/cni-8d45ef00-16fa-4e7c-edfb-3abecd6c1ee6" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.718 [INFO][4026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.718 [INFO][4026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.757 [INFO][4063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.757 [INFO][4063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.757 [INFO][4063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.770 [WARNING][4063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.770 [INFO][4063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.772 [INFO][4063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:09.779539 env[1322]: 2025-07-15 11:22:09.775 [INFO][4026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:09.781979 env[1322]: time="2025-07-15T11:22:09.781939117Z" level=info msg="TearDown network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" successfully" Jul 15 11:22:09.782077 env[1322]: time="2025-07-15T11:22:09.782059602Z" level=info msg="StopPodSandbox for \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" returns successfully" Jul 15 11:22:09.782878 systemd[1]: run-netns-cni\x2d8d45ef00\x2d16fa\x2d4e7c\x2dedfb\x2d3abecd6c1ee6.mount: Deactivated successfully. 
Jul 15 11:22:09.783037 kubelet[2105]: E0715 11:22:09.783013 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:09.784257 env[1322]: time="2025-07-15T11:22:09.784226774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5sfv,Uid:234de6b0-3684-41e7-9d27-ef2f8683df1a,Namespace:kube-system,Attempt:1,}" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.722 [INFO][4027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.722 [INFO][4027] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" iface="eth0" netns="/var/run/netns/cni-97e88672-74eb-1c33-10eb-7dc58c0f831a" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.722 [INFO][4027] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" iface="eth0" netns="/var/run/netns/cni-97e88672-74eb-1c33-10eb-7dc58c0f831a" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.734 [INFO][4027] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" iface="eth0" netns="/var/run/netns/cni-97e88672-74eb-1c33-10eb-7dc58c0f831a" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.734 [INFO][4027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.735 [INFO][4027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.773 [INFO][4076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.773 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.773 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.789 [WARNING][4076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.789 [INFO][4076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.791 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:09.802504 env[1322]: 2025-07-15 11:22:09.798 [INFO][4027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:09.802504 env[1322]: time="2025-07-15T11:22:09.802206658Z" level=info msg="TearDown network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" successfully" Jul 15 11:22:09.802504 env[1322]: time="2025-07-15T11:22:09.802235459Z" level=info msg="StopPodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" returns successfully" Jul 15 11:22:09.803955 env[1322]: time="2025-07-15T11:22:09.803924731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h54b2,Uid:5caaf704-0a5d-4b3c-abd2-5b536ffec524,Namespace:calico-system,Attempt:1,}" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.730 [INFO][4021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.730 [INFO][4021] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" iface="eth0" netns="/var/run/netns/cni-5526f570-6b28-ad52-b57e-cb5c69ca5805" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.730 [INFO][4021] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" iface="eth0" netns="/var/run/netns/cni-5526f570-6b28-ad52-b57e-cb5c69ca5805" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.730 [INFO][4021] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" iface="eth0" netns="/var/run/netns/cni-5526f570-6b28-ad52-b57e-cb5c69ca5805" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.730 [INFO][4021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.730 [INFO][4021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.775 [INFO][4070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.776 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.791 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.801 [WARNING][4070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.801 [INFO][4070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.803 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:09.808428 env[1322]: 2025-07-15 11:22:09.806 [INFO][4021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:09.808799 env[1322]: time="2025-07-15T11:22:09.808650772Z" level=info msg="TearDown network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" successfully" Jul 15 11:22:09.808799 env[1322]: time="2025-07-15T11:22:09.808675173Z" level=info msg="StopPodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" returns successfully" Jul 15 11:22:09.808952 kubelet[2105]: E0715 11:22:09.808923 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:09.809554 env[1322]: time="2025-07-15T11:22:09.809519769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6zgm,Uid:2bee128a-ec69-4c1c-9486-dda7cdd5da8f,Namespace:kube-system,Attempt:1,}" Jul 15 11:22:09.920425 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:22:09.920578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0488b9e6353: link becomes ready Jul 15 11:22:09.920624 
systemd-networkd[1098]: cali0488b9e6353: Link UP Jul 15 11:22:09.922904 systemd-networkd[1098]: cali0488b9e6353: Gained carrier Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.826 [INFO][4089] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0 calico-apiserver-64cf58f847- calico-apiserver 56b3e9f3-a41a-497f-bf77-8ab0f2093996 959 0 2025-07-15 11:21:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64cf58f847 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64cf58f847-st89j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0488b9e6353 [] [] }} ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.826 [INFO][4089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.873 [INFO][4139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" HandleID="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.874 [INFO][4139] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" HandleID="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a24c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64cf58f847-st89j", "timestamp":"2025-07-15 11:22:09.873952387 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.874 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.874 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.874 [INFO][4139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.886 [INFO][4139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.893 [INFO][4139] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.897 [INFO][4139] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.901 [INFO][4139] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.903 [INFO][4139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:09.950703 
env[1322]: 2025-07-15 11:22:09.903 [INFO][4139] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.905 [INFO][4139] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.908 [INFO][4139] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.913 [INFO][4139] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.913 [INFO][4139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" host="localhost" Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.913 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:09.950703 env[1322]: 2025-07-15 11:22:09.913 [INFO][4139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" HandleID="k8s-pod-network.c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.951355 env[1322]: 2025-07-15 11:22:09.916 [INFO][4089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"56b3e9f3-a41a-497f-bf77-8ab0f2093996", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64cf58f847-st89j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0488b9e6353", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:09.951355 env[1322]: 2025-07-15 11:22:09.917 [INFO][4089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.951355 env[1322]: 2025-07-15 11:22:09.917 [INFO][4089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0488b9e6353 ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.951355 env[1322]: 2025-07-15 11:22:09.923 [INFO][4089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.951355 env[1322]: 2025-07-15 11:22:09.929 [INFO][4089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"56b3e9f3-a41a-497f-bf77-8ab0f2093996", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b", Pod:"calico-apiserver-64cf58f847-st89j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0488b9e6353", MAC:"e2:84:d4:99:c7:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:09.951355 env[1322]: 2025-07-15 11:22:09.944 [INFO][4089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-st89j" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:09.953000 audit[4184]: NETFILTER_CFG table=filter:108 family=2 entries=54 op=nft_register_chain pid=4184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:09.953000 audit[4184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=fffff31fd710 a2=0 a3=ffffa91f9fa8 items=0 ppid=3536 pid=4184 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:09.953000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:09.976322 env[1322]: time="2025-07-15T11:22:09.973719586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:09.976322 env[1322]: time="2025-07-15T11:22:09.973758068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:09.976322 env[1322]: time="2025-07-15T11:22:09.973768468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:09.976322 env[1322]: time="2025-07-15T11:22:09.973910434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b pid=4196 runtime=io.containerd.runc.v2 Jul 15 11:22:10.032309 systemd-networkd[1098]: calia8ccdbdeb67: Link UP Jul 15 11:22:10.035989 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia8ccdbdeb67: link becomes ready Jul 15 11:22:10.035018 systemd-networkd[1098]: calia8ccdbdeb67: Gained carrier Jul 15 11:22:10.037714 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:10.048082 systemd-networkd[1098]: calia32f32b4664: Gained IPv6LL Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.857 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h54b2-eth0 csi-node-driver- 
calico-system 5caaf704-0a5d-4b3c-abd2-5b536ffec524 961 0 2025-07-15 11:21:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h54b2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia8ccdbdeb67 [] [] }} ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.857 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.894 [INFO][4154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" HandleID="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.895 [INFO][4154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" HandleID="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e59f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h54b2", "timestamp":"2025-07-15 11:22:09.894913318 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.895 [INFO][4154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.914 [INFO][4154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.915 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:09.987 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.001 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.005 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.007 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.008 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.009 [INFO][4154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.010 [INFO][4154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061 Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.015 
[INFO][4154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.022 [INFO][4154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.022 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" host="localhost" Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.022 [INFO][4154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:10.056938 env[1322]: 2025-07-15 11:22:10.022 [INFO][4154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" HandleID="k8s-pod-network.a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.058735 env[1322]: 2025-07-15 11:22:10.029 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h54b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5caaf704-0a5d-4b3c-abd2-5b536ffec524", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h54b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8ccdbdeb67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.058735 env[1322]: 2025-07-15 11:22:10.029 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.058735 env[1322]: 2025-07-15 11:22:10.029 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ccdbdeb67 ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.058735 env[1322]: 2025-07-15 11:22:10.034 [INFO][4114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.058735 env[1322]: 2025-07-15 11:22:10.035 [INFO][4114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h54b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5caaf704-0a5d-4b3c-abd2-5b536ffec524", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061", Pod:"csi-node-driver-h54b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8ccdbdeb67", MAC:"d6:b0:6c:3b:17:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 
11:22:10.058735 env[1322]: 2025-07-15 11:22:10.052 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061" Namespace="calico-system" Pod="csi-node-driver-h54b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:10.081000 audit[4242]: NETFILTER_CFG table=filter:109 family=2 entries=50 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:10.081000 audit[4242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24804 a0=3 a1=ffffe3aa58a0 a2=0 a3=ffff82d5bfa8 items=0 ppid=3536 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.081000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:10.095093 env[1322]: time="2025-07-15T11:22:10.094650964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:10.095093 env[1322]: time="2025-07-15T11:22:10.094744247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:10.095093 env[1322]: time="2025-07-15T11:22:10.094755208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:10.095267 env[1322]: time="2025-07-15T11:22:10.095078101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061 pid=4249 runtime=io.containerd.runc.v2 Jul 15 11:22:10.096442 env[1322]: time="2025-07-15T11:22:10.096407796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-st89j,Uid:56b3e9f3-a41a-497f-bf77-8ab0f2093996,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b\"" Jul 15 11:22:10.155488 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:10.190904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie7b46f31f10: link becomes ready Jul 15 11:22:10.189448 systemd-networkd[1098]: calie7b46f31f10: Link UP Jul 15 11:22:10.189576 systemd-networkd[1098]: calie7b46f31f10: Gained carrier Jul 15 11:22:10.232473 env[1322]: time="2025-07-15T11:22:10.232423920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h54b2,Uid:5caaf704-0a5d-4b3c-abd2-5b536ffec524,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061\"" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:09.881 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0 coredns-7c65d6cfc9- kube-system 234de6b0-3684-41e7-9d27-ef2f8683df1a 960 0 2025-07-15 11:21:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-m5sfv eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] calie7b46f31f10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:09.881 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:09.946 [INFO][4163] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" HandleID="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:09.947 [INFO][4163] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" HandleID="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cb60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-m5sfv", "timestamp":"2025-07-15 11:22:09.946789802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:09.947 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.022 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.022 [INFO][4163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.088 [INFO][4163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.094 [INFO][4163] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.111 [INFO][4163] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.117 [INFO][4163] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.120 [INFO][4163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.120 [INFO][4163] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.142 [INFO][4163] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.150 [INFO][4163] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.156 [INFO][4163] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.156 [INFO][4163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" host="localhost" Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.156 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:10.234506 env[1322]: 2025-07-15 11:22:10.156 [INFO][4163] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" HandleID="k8s-pod-network.272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:10.235072 env[1322]: 2025-07-15 11:22:10.164 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"234de6b0-3684-41e7-9d27-ef2f8683df1a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-m5sfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7b46f31f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.235072 env[1322]: 2025-07-15 11:22:10.164 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:10.235072 env[1322]: 2025-07-15 11:22:10.164 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7b46f31f10 ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:10.235072 env[1322]: 2025-07-15 11:22:10.188 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 
11:22:10.235072 env[1322]: 2025-07-15 11:22:10.199 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"234de6b0-3684-41e7-9d27-ef2f8683df1a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d", Pod:"coredns-7c65d6cfc9-m5sfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7b46f31f10", MAC:"d2:35:96:74:8a:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.235072 env[1322]: 2025-07-15 11:22:10.229 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5sfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:10.250000 audit[4293]: NETFILTER_CFG table=filter:110 family=2 entries=56 op=nft_register_chain pid=4293 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:10.250000 audit[4293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27764 a0=3 a1=ffffe067ced0 a2=0 a3=ffff92ac2fa8 items=0 ppid=3536 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.250000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:10.255039 env[1322]: time="2025-07-15T11:22:10.254814842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:10.255039 env[1322]: time="2025-07-15T11:22:10.254870924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:10.255039 env[1322]: time="2025-07-15T11:22:10.254891885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:10.255176 env[1322]: time="2025-07-15T11:22:10.255069652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d pid=4302 runtime=io.containerd.runc.v2 Jul 15 11:22:10.260061 systemd-networkd[1098]: calic78e3875e12: Link UP Jul 15 11:22:10.261388 systemd-networkd[1098]: calic78e3875e12: Gained carrier Jul 15 11:22:10.261895 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic78e3875e12: link becomes ready Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:09.886 [INFO][4121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0 coredns-7c65d6cfc9- kube-system 2bee128a-ec69-4c1c-9486-dda7cdd5da8f 962 0 2025-07-15 11:21:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-z6zgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic78e3875e12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:09.886 [INFO][4121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:09.955 [INFO][4166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" HandleID="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:09.955 [INFO][4166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" HandleID="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-z6zgm", "timestamp":"2025-07-15 11:22:09.955125796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:09.955 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.156 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.156 [INFO][4166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.222 [INFO][4166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.231 [INFO][4166] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.239 [INFO][4166] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.241 [INFO][4166] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.243 [INFO][4166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.244 [INFO][4166] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.245 [INFO][4166] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333 Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.248 [INFO][4166] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.255 [INFO][4166] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" host="localhost" Jul 15 
11:22:10.278299 env[1322]: 2025-07-15 11:22:10.255 [INFO][4166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" host="localhost" Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.255 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:10.278299 env[1322]: 2025-07-15 11:22:10.255 [INFO][4166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" HandleID="k8s-pod-network.58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.278862 env[1322]: 2025-07-15 11:22:10.257 [INFO][4121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2bee128a-ec69-4c1c-9486-dda7cdd5da8f", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-z6zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78e3875e12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.278862 env[1322]: 2025-07-15 11:22:10.257 [INFO][4121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.278862 env[1322]: 2025-07-15 11:22:10.257 [INFO][4121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic78e3875e12 ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.278862 env[1322]: 2025-07-15 11:22:10.261 [INFO][4121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.278862 env[1322]: 2025-07-15 11:22:10.265 [INFO][4121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2bee128a-ec69-4c1c-9486-dda7cdd5da8f", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333", Pod:"coredns-7c65d6cfc9-z6zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78e3875e12", MAC:"b2:ce:bf:33:1b:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.278862 env[1322]: 2025-07-15 11:22:10.274 [INFO][4121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-z6zgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:10.288000 audit[4339]: NETFILTER_CFG table=filter:111 family=2 entries=40 op=nft_register_chain pid=4339 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:10.288000 audit[4339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20312 a0=3 a1=ffffdbb98930 a2=0 a3=ffff89a26fa8 items=0 ppid=3536 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.288000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:10.297019 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:10.300915 env[1322]: time="2025-07-15T11:22:10.300181671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:10.300915 env[1322]: time="2025-07-15T11:22:10.300224313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:10.300915 env[1322]: time="2025-07-15T11:22:10.300243314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:10.300915 env[1322]: time="2025-07-15T11:22:10.300430961Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333 pid=4347 runtime=io.containerd.runc.v2 Jul 15 11:22:10.321891 env[1322]: time="2025-07-15T11:22:10.321809602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5sfv,Uid:234de6b0-3684-41e7-9d27-ef2f8683df1a,Namespace:kube-system,Attempt:1,} returns sandbox id \"272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d\"" Jul 15 11:22:10.322663 kubelet[2105]: E0715 11:22:10.322493 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:10.325484 env[1322]: time="2025-07-15T11:22:10.325407670Z" level=info msg="CreateContainer within sandbox \"272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:22:10.334597 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:10.337813 env[1322]: time="2025-07-15T11:22:10.337768220Z" level=info msg="CreateContainer within sandbox \"272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eaca7d4daddebc1c0687e427d0cdf4c5d35a7987b000383f10a7b5532fd9276e\"" Jul 15 11:22:10.338480 env[1322]: time="2025-07-15T11:22:10.338453368Z" level=info msg="StartContainer for \"eaca7d4daddebc1c0687e427d0cdf4c5d35a7987b000383f10a7b5532fd9276e\"" Jul 15 11:22:10.353424 env[1322]: time="2025-07-15T11:22:10.353380903Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6zgm,Uid:2bee128a-ec69-4c1c-9486-dda7cdd5da8f,Namespace:kube-system,Attempt:1,} returns sandbox id \"58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333\"" Jul 15 11:22:10.354059 kubelet[2105]: E0715 11:22:10.354029 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:10.357213 env[1322]: time="2025-07-15T11:22:10.357179499Z" level=info msg="CreateContainer within sandbox \"58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:22:10.369733 env[1322]: time="2025-07-15T11:22:10.369687295Z" level=info msg="CreateContainer within sandbox \"58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c0f50888aab36116429c4f7a8455c40523e3bfab0561f597cbcb2e22cab6c7c\"" Jul 15 11:22:10.370857 env[1322]: time="2025-07-15T11:22:10.370823701Z" level=info msg="StartContainer for \"6c0f50888aab36116429c4f7a8455c40523e3bfab0561f597cbcb2e22cab6c7c\"" Jul 15 11:22:10.430989 env[1322]: time="2025-07-15T11:22:10.430942858Z" level=info msg="StartContainer for \"eaca7d4daddebc1c0687e427d0cdf4c5d35a7987b000383f10a7b5532fd9276e\" returns successfully" Jul 15 11:22:10.432430 env[1322]: time="2025-07-15T11:22:10.432335356Z" level=info msg="StartContainer for \"6c0f50888aab36116429c4f7a8455c40523e3bfab0561f597cbcb2e22cab6c7c\" returns successfully" Jul 15 11:22:10.651328 env[1322]: time="2025-07-15T11:22:10.651252935Z" level=info msg="StopPodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\"" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.699 [INFO][4476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:10.736387 env[1322]: 
2025-07-15 11:22:10.699 [INFO][4476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" iface="eth0" netns="/var/run/netns/cni-37c76d89-0797-f511-be29-0c53af57fb5e" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.699 [INFO][4476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" iface="eth0" netns="/var/run/netns/cni-37c76d89-0797-f511-be29-0c53af57fb5e" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.699 [INFO][4476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" iface="eth0" netns="/var/run/netns/cni-37c76d89-0797-f511-be29-0c53af57fb5e" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.699 [INFO][4476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.699 [INFO][4476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.720 [INFO][4485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.720 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.720 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.731 [WARNING][4485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.731 [INFO][4485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.732 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:10.736387 env[1322]: 2025-07-15 11:22:10.734 [INFO][4476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:10.736823 env[1322]: time="2025-07-15T11:22:10.736543288Z" level=info msg="TearDown network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" successfully" Jul 15 11:22:10.736823 env[1322]: time="2025-07-15T11:22:10.736574130Z" level=info msg="StopPodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" returns successfully" Jul 15 11:22:10.737491 env[1322]: time="2025-07-15T11:22:10.737463166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fnwj2,Uid:bb2b16b0-5f08-47fc-9227-ffb2cce80eb6,Namespace:calico-system,Attempt:1,}" Jul 15 11:22:10.773095 systemd[1]: run-netns-cni\x2d37c76d89\x2d0797\x2df511\x2dbe29\x2d0c53af57fb5e.mount: Deactivated successfully. Jul 15 11:22:10.773228 systemd[1]: run-netns-cni\x2d5526f570\x2d6b28\x2dad52\x2db57e\x2dcb5c69ca5805.mount: Deactivated successfully. 
Jul 15 11:22:10.773309 systemd[1]: run-netns-cni\x2d97e88672\x2d74eb\x2d1c33\x2d10eb\x2d7dc58c0f831a.mount: Deactivated successfully. Jul 15 11:22:10.783985 kubelet[2105]: E0715 11:22:10.783960 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:10.790491 kubelet[2105]: E0715 11:22:10.790039 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:10.801171 kubelet[2105]: I0715 11:22:10.798969 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z6zgm" podStartSLOduration=32.79895458 podStartE2EDuration="32.79895458s" podCreationTimestamp="2025-07-15 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:22:10.798567644 +0000 UTC m=+38.244072819" watchObservedRunningTime="2025-07-15 11:22:10.79895458 +0000 UTC m=+38.244459755" Jul 15 11:22:10.812000 audit[4508]: NETFILTER_CFG table=filter:112 family=2 entries=18 op=nft_register_rule pid=4508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:10.812000 audit[4508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff185bcf0 a2=0 a3=1 items=0 ppid=2215 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:10.817000 audit[4508]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_rule pid=4508 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jul 15 11:22:10.817000 audit[4508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=fffff185bcf0 a2=0 a3=1 items=0 ppid=2215 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:10.823477 kubelet[2105]: I0715 11:22:10.822554 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m5sfv" podStartSLOduration=32.822524231 podStartE2EDuration="32.822524231s" podCreationTimestamp="2025-07-15 11:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:22:10.808873988 +0000 UTC m=+38.254379203" watchObservedRunningTime="2025-07-15 11:22:10.822524231 +0000 UTC m=+38.268029406" Jul 15 11:22:10.837000 audit[4512]: NETFILTER_CFG table=filter:114 family=2 entries=15 op=nft_register_rule pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:10.837000 audit[4512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe1d07c60 a2=0 a3=1 items=0 ppid=2215 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:10.850000 audit[4512]: NETFILTER_CFG table=nat:115 family=2 entries=49 op=nft_register_chain pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:10.850000 audit[4512]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffe1d07c60 a2=0 a3=1 items=0 ppid=2215 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:10.909896 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali642129b5dee: link becomes ready Jul 15 11:22:10.908228 systemd-networkd[1098]: cali642129b5dee: Link UP Jul 15 11:22:10.909233 systemd-networkd[1098]: cali642129b5dee: Gained carrier Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.830 [INFO][4492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0 goldmane-58fd7646b9- calico-system bb2b16b0-5f08-47fc-9227-ffb2cce80eb6 995 0 2025-07-15 11:21:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-fnwj2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali642129b5dee [] [] }} ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.830 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.864 [INFO][4511] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" HandleID="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.864 [INFO][4511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" HandleID="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058c9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-fnwj2", "timestamp":"2025-07-15 11:22:10.864037061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.864 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.864 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.864 [INFO][4511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.877 [INFO][4511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.881 [INFO][4511] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.885 [INFO][4511] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.886 [INFO][4511] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.888 [INFO][4511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.889 [INFO][4511] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.890 [INFO][4511] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928 Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.893 [INFO][4511] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.902 [INFO][4511] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" host="localhost" Jul 15 
11:22:10.924491 env[1322]: 2025-07-15 11:22:10.902 [INFO][4511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" host="localhost" Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.902 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:10.924491 env[1322]: 2025-07-15 11:22:10.902 [INFO][4511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" HandleID="k8s-pod-network.512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.925290 env[1322]: 2025-07-15 11:22:10.905 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-fnwj2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali642129b5dee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.925290 env[1322]: 2025-07-15 11:22:10.905 [INFO][4492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.925290 env[1322]: 2025-07-15 11:22:10.905 [INFO][4492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali642129b5dee ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.925290 env[1322]: 2025-07-15 11:22:10.909 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.925290 env[1322]: 2025-07-15 11:22:10.910 [INFO][4492] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928", Pod:"goldmane-58fd7646b9-fnwj2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali642129b5dee", MAC:"6a:18:2d:ca:02:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:10.925290 env[1322]: 2025-07-15 11:22:10.919 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928" Namespace="calico-system" Pod="goldmane-58fd7646b9-fnwj2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:10.933000 audit[4530]: NETFILTER_CFG table=filter:116 family=2 entries=56 op=nft_register_chain pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:10.933000 audit[4530]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=28712 a0=3 a1=ffffe7a3cff0 a2=0 a3=ffff993ecfa8 items=0 ppid=3536 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:10.933000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:10.978953 env[1322]: time="2025-07-15T11:22:10.978099040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:10.978953 env[1322]: time="2025-07-15T11:22:10.978154082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:10.978953 env[1322]: time="2025-07-15T11:22:10.978179763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:10.978953 env[1322]: time="2025-07-15T11:22:10.978359731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928 pid=4539 runtime=io.containerd.runc.v2 Jul 15 11:22:11.046809 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:11.071536 env[1322]: time="2025-07-15T11:22:11.071485762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fnwj2,Uid:bb2b16b0-5f08-47fc-9227-ffb2cce80eb6,Namespace:calico-system,Attempt:1,} returns sandbox id \"512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928\"" Jul 15 11:22:11.134164 systemd-networkd[1098]: cali0488b9e6353: Gained IPv6LL Jul 15 11:22:11.146359 env[1322]: time="2025-07-15T11:22:11.146292313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:11.148406 env[1322]: time="2025-07-15T11:22:11.148359555Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:11.149685 env[1322]: time="2025-07-15T11:22:11.149641687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:11.151514 env[1322]: time="2025-07-15T11:22:11.151464280Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Jul 15 11:22:11.152890 env[1322]: time="2025-07-15T11:22:11.152244231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 11:22:11.154576 env[1322]: time="2025-07-15T11:22:11.153529202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 11:22:11.163777 env[1322]: time="2025-07-15T11:22:11.163682808Z" level=info msg="CreateContainer within sandbox \"ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 11:22:11.173212 env[1322]: time="2025-07-15T11:22:11.173161227Z" level=info msg="CreateContainer within sandbox \"ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"370ba2a875bf6fc86440f17969d7cb6d5fd534939eadba7c1561f91b8eb4562d\"" Jul 15 11:22:11.174661 env[1322]: time="2025-07-15T11:22:11.174627246Z" level=info msg="StartContainer for \"370ba2a875bf6fc86440f17969d7cb6d5fd534939eadba7c1561f91b8eb4562d\"" Jul 15 11:22:11.232625 env[1322]: time="2025-07-15T11:22:11.232579883Z" level=info msg="StartContainer for \"370ba2a875bf6fc86440f17969d7cb6d5fd534939eadba7c1561f91b8eb4562d\" returns successfully" Jul 15 11:22:11.516981 systemd-networkd[1098]: calie7b46f31f10: Gained IPv6LL Jul 15 11:22:11.517252 systemd-networkd[1098]: calic78e3875e12: Gained IPv6LL Jul 15 11:22:11.649346 env[1322]: time="2025-07-15T11:22:11.649305786Z" level=info msg="StopPodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\"" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.696 [INFO][4633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.696 [INFO][4633] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" iface="eth0" netns="/var/run/netns/cni-ab65be24-3911-7bc2-fcd7-2c9f23ba97cf" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.696 [INFO][4633] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" iface="eth0" netns="/var/run/netns/cni-ab65be24-3911-7bc2-fcd7-2c9f23ba97cf" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.696 [INFO][4633] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" iface="eth0" netns="/var/run/netns/cni-ab65be24-3911-7bc2-fcd7-2c9f23ba97cf" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.696 [INFO][4633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.696 [INFO][4633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.714 [INFO][4642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.714 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.714 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.724 [WARNING][4642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.724 [INFO][4642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.726 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:11.729767 env[1322]: 2025-07-15 11:22:11.728 [INFO][4633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:11.730372 env[1322]: time="2025-07-15T11:22:11.730325665Z" level=info msg="TearDown network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" successfully" Jul 15 11:22:11.730446 env[1322]: time="2025-07-15T11:22:11.730428869Z" level=info msg="StopPodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" returns successfully" Jul 15 11:22:11.731189 env[1322]: time="2025-07-15T11:22:11.731157098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-7wqks,Uid:321abb3c-e37b-40c2-9f34-4c9e458226fa,Namespace:calico-apiserver,Attempt:1,}" Jul 15 11:22:11.769112 systemd[1]: run-netns-cni\x2dab65be24\x2d3911\x2d7bc2\x2dfcd7\x2d2c9f23ba97cf.mount: Deactivated successfully. 
Jul 15 11:22:11.796887 kubelet[2105]: E0715 11:22:11.796694 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:11.796887 kubelet[2105]: E0715 11:22:11.796868 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:11.813263 kubelet[2105]: I0715 11:22:11.813131 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5744454759-sfdbw" podStartSLOduration=19.676592625 podStartE2EDuration="21.813115175s" podCreationTimestamp="2025-07-15 11:21:50 +0000 UTC" firstStartedPulling="2025-07-15 11:22:09.016627597 +0000 UTC m=+36.462132772" lastFinishedPulling="2025-07-15 11:22:11.153150147 +0000 UTC m=+38.598655322" observedRunningTime="2025-07-15 11:22:11.812965049 +0000 UTC m=+39.258470224" watchObservedRunningTime="2025-07-15 11:22:11.813115175 +0000 UTC m=+39.258620350" Jul 15 11:22:11.839199 systemd-networkd[1098]: calia8ccdbdeb67: Gained IPv6LL Jul 15 11:22:11.843453 systemd-networkd[1098]: calid58b0c0ba00: Link UP Jul 15 11:22:11.845420 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:22:11.845581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid58b0c0ba00: link becomes ready Jul 15 11:22:11.845688 systemd-networkd[1098]: calid58b0c0ba00: Gained carrier Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.771 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0 calico-apiserver-64cf58f847- calico-apiserver 321abb3c-e37b-40c2-9f34-4c9e458226fa 1027 0 2025-07-15 11:21:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64cf58f847 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64cf58f847-7wqks eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid58b0c0ba00 [] [] }} ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.772 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.798 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" HandleID="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.798 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" HandleID="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a44e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64cf58f847-7wqks", "timestamp":"2025-07-15 11:22:11.798742001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.798 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.799 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.799 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.816 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.820 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.824 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.825 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.827 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.828 [INFO][4666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.829 [INFO][4666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76 Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.832 [INFO][4666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.838 [INFO][4666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.838 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" host="localhost" Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.839 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:11.861123 env[1322]: 2025-07-15 11:22:11.839 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" HandleID="k8s-pod-network.4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.861736 env[1322]: 2025-07-15 11:22:11.841 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"321abb3c-e37b-40c2-9f34-4c9e458226fa", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64cf58f847-7wqks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58b0c0ba00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:11.861736 env[1322]: 2025-07-15 11:22:11.841 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.861736 env[1322]: 2025-07-15 11:22:11.841 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid58b0c0ba00 ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.861736 env[1322]: 2025-07-15 11:22:11.846 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.861736 env[1322]: 2025-07-15 11:22:11.847 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"321abb3c-e37b-40c2-9f34-4c9e458226fa", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76", Pod:"calico-apiserver-64cf58f847-7wqks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58b0c0ba00", MAC:"3a:b5:41:6a:83:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:11.861736 env[1322]: 2025-07-15 11:22:11.857 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76" Namespace="calico-apiserver" Pod="calico-apiserver-64cf58f847-7wqks" WorkloadEndpoint="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:11.867000 audit[4682]: NETFILTER_CFG table=filter:117 family=2 entries=53 op=nft_register_chain pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:22:11.867000 audit[4682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26608 a0=3 a1=ffffd9e37600 a2=0 a3=ffffb846dfa8 items=0 ppid=3536 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:11.867000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:22:11.872764 env[1322]: time="2025-07-15T11:22:11.872703638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:22:11.872764 env[1322]: time="2025-07-15T11:22:11.872747400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:22:11.872764 env[1322]: time="2025-07-15T11:22:11.872758120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:22:11.872970 env[1322]: time="2025-07-15T11:22:11.872933887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76 pid=4691 runtime=io.containerd.runc.v2 Jul 15 11:22:11.909811 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:22:11.934642 env[1322]: time="2025-07-15T11:22:11.934573352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64cf58f847-7wqks,Uid:321abb3c-e37b-40c2-9f34-4c9e458226fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76\"" Jul 15 11:22:11.964978 systemd-networkd[1098]: cali642129b5dee: Gained IPv6LL Jul 15 11:22:12.799314 kubelet[2105]: E0715 11:22:12.799282 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:12.801434 kubelet[2105]: E0715 11:22:12.801407 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:22:12.802568 kubelet[2105]: I0715 11:22:12.802545 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:22:12.960239 env[1322]: time="2025-07-15T11:22:12.960178989Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:12.961507 env[1322]: time="2025-07-15T11:22:12.961473479Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:12.963465 env[1322]: time="2025-07-15T11:22:12.963435316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:12.964390 env[1322]: time="2025-07-15T11:22:12.964347151Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:12.965481 env[1322]: time="2025-07-15T11:22:12.965448794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 11:22:12.966427 env[1322]: time="2025-07-15T11:22:12.966392110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 11:22:12.968081 env[1322]: time="2025-07-15T11:22:12.967408710Z" level=info msg="CreateContainer within sandbox \"c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 11:22:12.975912 env[1322]: time="2025-07-15T11:22:12.975554706Z" level=info msg="CreateContainer within sandbox \"c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c5c0ca87c609063bc62bb4d3b64a5870ed6d105994d069c801710094b179eae\"" Jul 15 11:22:12.977190 env[1322]: time="2025-07-15T11:22:12.977161129Z" level=info msg="StartContainer for \"4c5c0ca87c609063bc62bb4d3b64a5870ed6d105994d069c801710094b179eae\"" Jul 15 11:22:12.989000 systemd-networkd[1098]: calid58b0c0ba00: Gained IPv6LL Jul 15 11:22:13.000539 systemd[1]: 
run-containerd-runc-k8s.io-4c5c0ca87c609063bc62bb4d3b64a5870ed6d105994d069c801710094b179eae-runc.PypxGP.mount: Deactivated successfully. Jul 15 11:22:13.036922 env[1322]: time="2025-07-15T11:22:13.036856689Z" level=info msg="StartContainer for \"4c5c0ca87c609063bc62bb4d3b64a5870ed6d105994d069c801710094b179eae\" returns successfully" Jul 15 11:22:13.830000 audit[4764]: NETFILTER_CFG table=filter:118 family=2 entries=12 op=nft_register_rule pid=4764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:13.833098 kernel: kauditd_printk_skb: 34 callbacks suppressed Jul 15 11:22:13.833172 kernel: audit: type=1325 audit(1752578533.830:426): table=filter:118 family=2 entries=12 op=nft_register_rule pid=4764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:13.833198 kernel: audit: type=1300 audit(1752578533.830:426): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffccc88f80 a2=0 a3=1 items=0 ppid=2215 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:13.830000 audit[4764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffccc88f80 a2=0 a3=1 items=0 ppid=2215 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:13.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:13.837168 kernel: audit: type=1327 audit(1752578533.830:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:13.837000 audit[4764]: NETFILTER_CFG table=nat:119 family=2 entries=22 op=nft_register_rule pid=4764 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:13.837000 audit[4764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffccc88f80 a2=0 a3=1 items=0 ppid=2215 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:13.842627 kernel: audit: type=1325 audit(1752578533.837:427): table=nat:119 family=2 entries=22 op=nft_register_rule pid=4764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:13.842685 kernel: audit: type=1300 audit(1752578533.837:427): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffccc88f80 a2=0 a3=1 items=0 ppid=2215 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:13.842708 kernel: audit: type=1327 audit(1752578533.837:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:13.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:13.876095 env[1322]: time="2025-07-15T11:22:13.876047594Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:13.877339 env[1322]: time="2025-07-15T11:22:13.877311162Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:13.878650 env[1322]: time="2025-07-15T11:22:13.878618171Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:13.879903 env[1322]: time="2025-07-15T11:22:13.879879139Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:13.880355 env[1322]: time="2025-07-15T11:22:13.880330996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 11:22:13.881690 env[1322]: time="2025-07-15T11:22:13.881554282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 11:22:13.884215 env[1322]: time="2025-07-15T11:22:13.884190502Z" level=info msg="CreateContainer within sandbox \"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 11:22:13.895024 env[1322]: time="2025-07-15T11:22:13.894922267Z" level=info msg="CreateContainer within sandbox \"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2943a9943c8e97cf1a14f362b2f10bcde3ff2ece8e0bc7787c9c94026544fccd\"" Jul 15 11:22:13.895575 env[1322]: time="2025-07-15T11:22:13.895552171Z" level=info msg="StartContainer for \"2943a9943c8e97cf1a14f362b2f10bcde3ff2ece8e0bc7787c9c94026544fccd\"" Jul 15 11:22:13.967776 env[1322]: time="2025-07-15T11:22:13.967727258Z" level=info msg="StartContainer for \"2943a9943c8e97cf1a14f362b2f10bcde3ff2ece8e0bc7787c9c94026544fccd\" returns successfully" Jul 15 11:22:14.060141 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:41954.service. 
Jul 15 11:22:14.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:41954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:14.062863 kernel: audit: type=1130 audit(1752578534.059:428): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:41954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:14.105000 audit[4804]: USER_ACCT pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:14.106871 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 41954 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:14.108867 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:14.107000 audit[4804]: CRED_ACQ pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:14.112738 kernel: audit: type=1101 audit(1752578534.105:429): pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:14.112781 kernel: audit: type=1103 audit(1752578534.107:430): pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jul 15 11:22:14.112800 kernel: audit: type=1006 audit(1752578534.107:431): pid=4804 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 15 11:22:14.107000 audit[4804]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe687bd10 a2=3 a3=1 items=0 ppid=1 pid=4804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:14.107000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:14.117211 systemd-logind[1305]: New session 9 of user core. Jul 15 11:22:14.117625 systemd[1]: Started session-9.scope. Jul 15 11:22:14.121000 audit[4804]: USER_START pid=4804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:14.123000 audit[4807]: CRED_ACQ pid=4807 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:14.502551 sshd[4804]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:14.502000 audit[4804]: USER_END pid=4804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:14.502000 audit[4804]: CRED_DISP pid=4804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' 
Jul 15 11:22:14.505487 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:41954.service: Deactivated successfully. Jul 15 11:22:14.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:41954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:14.506763 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit. Jul 15 11:22:14.506871 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 11:22:14.507534 systemd-logind[1305]: Removed session 9. Jul 15 11:22:14.808652 kubelet[2105]: I0715 11:22:14.808543 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:22:15.385747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342313513.mount: Deactivated successfully. Jul 15 11:22:15.942119 env[1322]: time="2025-07-15T11:22:15.942063186Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:15.943389 env[1322]: time="2025-07-15T11:22:15.943365072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:15.944850 env[1322]: time="2025-07-15T11:22:15.944809364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:15.946203 env[1322]: time="2025-07-15T11:22:15.946179813Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:15.947435 env[1322]: 
time="2025-07-15T11:22:15.947398097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 11:22:15.948892 env[1322]: time="2025-07-15T11:22:15.948866510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 11:22:15.950068 env[1322]: time="2025-07-15T11:22:15.949396889Z" level=info msg="CreateContainer within sandbox \"512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 11:22:15.961903 env[1322]: time="2025-07-15T11:22:15.961831814Z" level=info msg="CreateContainer within sandbox \"512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ecee4934b417390e4b483f41b19e53fa2a694c92813c871e5e0171b310edc1ac\"" Jul 15 11:22:15.962886 env[1322]: time="2025-07-15T11:22:15.962548280Z" level=info msg="StartContainer for \"ecee4934b417390e4b483f41b19e53fa2a694c92813c871e5e0171b310edc1ac\"" Jul 15 11:22:16.023609 env[1322]: time="2025-07-15T11:22:16.023565687Z" level=info msg="StartContainer for \"ecee4934b417390e4b483f41b19e53fa2a694c92813c871e5e0171b310edc1ac\" returns successfully" Jul 15 11:22:16.261504 env[1322]: time="2025-07-15T11:22:16.261451924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:16.263479 env[1322]: time="2025-07-15T11:22:16.263449234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:16.264924 env[1322]: time="2025-07-15T11:22:16.264885805Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:16.266369 env[1322]: time="2025-07-15T11:22:16.266334375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:16.267021 env[1322]: time="2025-07-15T11:22:16.266989998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 11:22:16.268464 env[1322]: time="2025-07-15T11:22:16.268297964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 11:22:16.269718 env[1322]: time="2025-07-15T11:22:16.269685012Z" level=info msg="CreateContainer within sandbox \"4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 11:22:16.279326 env[1322]: time="2025-07-15T11:22:16.279248227Z" level=info msg="CreateContainer within sandbox \"4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56b4418ae3cbc9d8231f136de852f77af3740a863c956057a5f1f3beeb817bee\"" Jul 15 11:22:16.280084 env[1322]: time="2025-07-15T11:22:16.280056655Z" level=info msg="StartContainer for \"56b4418ae3cbc9d8231f136de852f77af3740a863c956057a5f1f3beeb817bee\"" Jul 15 11:22:16.341527 env[1322]: time="2025-07-15T11:22:16.341480803Z" level=info msg="StartContainer for \"56b4418ae3cbc9d8231f136de852f77af3740a863c956057a5f1f3beeb817bee\" returns successfully" Jul 15 11:22:16.828544 kubelet[2105]: I0715 11:22:16.828482 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-64cf58f847-st89j" podStartSLOduration=26.960196986 podStartE2EDuration="29.82846763s" podCreationTimestamp="2025-07-15 11:21:47 +0000 UTC" firstStartedPulling="2025-07-15 11:22:10.098001822 +0000 UTC m=+37.543506997" lastFinishedPulling="2025-07-15 11:22:12.966272466 +0000 UTC m=+40.411777641" observedRunningTime="2025-07-15 11:22:13.815905562 +0000 UTC m=+41.261410737" watchObservedRunningTime="2025-07-15 11:22:16.82846763 +0000 UTC m=+44.273972805" Jul 15 11:22:16.828939 kubelet[2105]: I0715 11:22:16.828650 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64cf58f847-7wqks" podStartSLOduration=25.496370281 podStartE2EDuration="29.828643956s" podCreationTimestamp="2025-07-15 11:21:47 +0000 UTC" firstStartedPulling="2025-07-15 11:22:11.935715918 +0000 UTC m=+39.381221093" lastFinishedPulling="2025-07-15 11:22:16.267989593 +0000 UTC m=+43.713494768" observedRunningTime="2025-07-15 11:22:16.828117777 +0000 UTC m=+44.273622952" watchObservedRunningTime="2025-07-15 11:22:16.828643956 +0000 UTC m=+44.274149091" Jul 15 11:22:16.846000 audit[4908]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:16.846000 audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffffd3d6e0 a2=0 a3=1 items=0 ppid=2215 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:16.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:16.853000 audit[4908]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:16.853000 
audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffffd3d6e0 a2=0 a3=1 items=0 ppid=2215 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:16.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:16.868000 audit[4917]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:16.868000 audit[4917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc66a07a0 a2=0 a3=1 items=0 ppid=2215 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:16.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:16.876000 audit[4917]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:16.876000 audit[4917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffc66a07a0 a2=0 a3=1 items=0 ppid=2215 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:16.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:17.399162 env[1322]: time="2025-07-15T11:22:17.399122254Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:17.402526 env[1322]: time="2025-07-15T11:22:17.402496969Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:17.404541 env[1322]: time="2025-07-15T11:22:17.404517198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:17.406500 env[1322]: time="2025-07-15T11:22:17.406474265Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:22:17.407188 env[1322]: time="2025-07-15T11:22:17.407161688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 11:22:17.416118 env[1322]: time="2025-07-15T11:22:17.416090353Z" level=info msg="CreateContainer within sandbox \"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 11:22:17.431383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352152327.mount: Deactivated successfully. 
Jul 15 11:22:17.436860 env[1322]: time="2025-07-15T11:22:17.436803460Z" level=info msg="CreateContainer within sandbox \"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bf181ae816a8ec18e1d5a6225d5fcb0e0a4b40e15fefe90d77fe39939bfab2ae\"" Jul 15 11:22:17.437319 env[1322]: time="2025-07-15T11:22:17.437289957Z" level=info msg="StartContainer for \"bf181ae816a8ec18e1d5a6225d5fcb0e0a4b40e15fefe90d77fe39939bfab2ae\"" Jul 15 11:22:17.508915 env[1322]: time="2025-07-15T11:22:17.508863281Z" level=info msg="StartContainer for \"bf181ae816a8ec18e1d5a6225d5fcb0e0a4b40e15fefe90d77fe39939bfab2ae\" returns successfully" Jul 15 11:22:17.719064 kubelet[2105]: I0715 11:22:17.719034 2105 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 11:22:17.719218 kubelet[2105]: I0715 11:22:17.719106 2105 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 11:22:17.819601 kubelet[2105]: I0715 11:22:17.819577 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:22:17.837704 kubelet[2105]: I0715 11:22:17.837646 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-fnwj2" podStartSLOduration=22.962302996 podStartE2EDuration="27.837630065s" podCreationTimestamp="2025-07-15 11:21:50 +0000 UTC" firstStartedPulling="2025-07-15 11:22:11.07270185 +0000 UTC m=+38.518207025" lastFinishedPulling="2025-07-15 11:22:15.948028919 +0000 UTC m=+43.393534094" observedRunningTime="2025-07-15 11:22:16.84935008 +0000 UTC m=+44.294855255" watchObservedRunningTime="2025-07-15 11:22:17.837630065 +0000 UTC m=+45.283135200" Jul 15 11:22:17.838668 kubelet[2105]: I0715 11:22:17.838632 2105 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h54b2" podStartSLOduration=20.658248353 podStartE2EDuration="27.838622059s" podCreationTimestamp="2025-07-15 11:21:50 +0000 UTC" firstStartedPulling="2025-07-15 11:22:10.234520686 +0000 UTC m=+37.680025821" lastFinishedPulling="2025-07-15 11:22:17.414894352 +0000 UTC m=+44.860399527" observedRunningTime="2025-07-15 11:22:17.83750026 +0000 UTC m=+45.283005435" watchObservedRunningTime="2025-07-15 11:22:17.838622059 +0000 UTC m=+45.284127234" Jul 15 11:22:19.508188 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:41964.service. Jul 15 11:22:19.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:41964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:19.508632 kubelet[2105]: I0715 11:22:19.507209 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:22:19.509567 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 15 11:22:19.509633 kernel: audit: type=1130 audit(1752578539.507:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:41964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:19.566000 audit[4990]: NETFILTER_CFG table=filter:124 family=2 entries=11 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:19.566000 audit[4990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffca77610 a2=0 a3=1 items=0 ppid=2215 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:19.572123 kernel: audit: type=1325 audit(1752578539.566:442): table=filter:124 family=2 entries=11 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:19.572188 kernel: audit: type=1300 audit(1752578539.566:442): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffca77610 a2=0 a3=1 items=0 ppid=2215 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:19.572607 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 41964 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:19.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:19.574344 kernel: audit: type=1327 audit(1752578539.566:442): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:19.574414 kernel: audit: type=1101 audit(1752578539.571:443): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.571000 audit[4987]: USER_ACCT pid=4987 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.574709 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:19.573000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.579273 kernel: audit: type=1103 audit(1752578539.573:444): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.581049 kernel: audit: type=1006 audit(1752578539.573:445): pid=4987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 15 11:22:19.573000 audit[4987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7adb310 a2=3 a3=1 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:19.584360 kernel: audit: type=1300 audit(1752578539.573:445): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7adb310 a2=3 a3=1 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:19.584424 kernel: audit: type=1327 audit(1752578539.573:445): proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:19.573000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:19.584197 
systemd[1]: Started session-10.scope. Jul 15 11:22:19.585521 systemd-logind[1305]: New session 10 of user core. Jul 15 11:22:19.579000 audit[4990]: NETFILTER_CFG table=nat:125 family=2 entries=29 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:19.595268 kernel: audit: type=1325 audit(1752578539.579:446): table=nat:125 family=2 entries=29 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:19.579000 audit[4990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=fffffca77610 a2=0 a3=1 items=0 ppid=2215 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:19.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:19.590000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.592000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.654452 kubelet[2105]: I0715 11:22:19.654348 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:22:19.740416 systemd[1]: run-containerd-runc-k8s.io-370ba2a875bf6fc86440f17969d7cb6d5fd534939eadba7c1561f91b8eb4562d-runc.y5TC2F.mount: Deactivated successfully. 
Jul 15 11:22:19.832184 sshd[4987]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:19.832000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.832000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:41964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:19.835496 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:41964.service: Deactivated successfully. Jul 15 11:22:19.836559 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit. Jul 15 11:22:19.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:41966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:19.837934 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:41966.service. Jul 15 11:22:19.838337 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 11:22:19.839526 systemd-logind[1305]: Removed session 10. 
Jul 15 11:22:19.879000 audit[5049]: USER_ACCT pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.880000 audit[5049]: CRED_ACQ pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.880000 audit[5049]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1b6a4a0 a2=3 a3=1 items=0 ppid=1 pid=5049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:19.880000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:19.881506 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:19.882399 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 41966 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:19.884848 systemd-logind[1305]: New session 11 of user core. Jul 15 11:22:19.885649 systemd[1]: Started session-11.scope. 
Jul 15 11:22:19.888000 audit[5049]: USER_START pid=5049 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:19.889000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.053473 sshd[5049]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:20.054136 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:41968.service. Jul 15 11:22:20.053000 audit[5049]: USER_END pid=5049 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.054000 audit[5049]: CRED_DISP pid=5049 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:41968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:20.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:41966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:20.056765 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:41966.service: Deactivated successfully. 
Jul 15 11:22:20.058197 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 11:22:20.058652 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit. Jul 15 11:22:20.059661 systemd-logind[1305]: Removed session 11. Jul 15 11:22:20.098000 audit[5060]: USER_ACCT pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.099191 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 41968 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:20.099000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.099000 audit[5060]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4504630 a2=3 a3=1 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:20.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:20.101206 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:20.105421 systemd-logind[1305]: New session 12 of user core. Jul 15 11:22:20.105495 systemd[1]: Started session-12.scope. 
Jul 15 11:22:20.108000 audit[5060]: USER_START pid=5060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.109000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.222263 sshd[5060]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:20.222000 audit[5060]: USER_END pid=5060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.222000 audit[5060]: CRED_DISP pid=5060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:20.225204 systemd-logind[1305]: Session 12 logged out. Waiting for processes to exit. Jul 15 11:22:20.225392 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:41968.service: Deactivated successfully. Jul 15 11:22:20.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:41968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:20.226318 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 11:22:20.226720 systemd-logind[1305]: Removed session 12. 
Jul 15 11:22:25.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:40792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:25.225122 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:40792.service. Jul 15 11:22:25.228079 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 15 11:22:25.228150 kernel: audit: type=1130 audit(1752578545.224:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:40792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:25.261000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.262908 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 40792 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:25.263908 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:25.262000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.267791 kernel: audit: type=1101 audit(1752578545.261:471): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.267827 kernel: audit: type=1103 audit(1752578545.262:472): pid=5090 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.269491 kernel: audit: type=1006 audit(1752578545.262:473): pid=5090 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 15 11:22:25.262000 audit[5090]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1909c80 a2=3 a3=1 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:25.272297 kernel: audit: type=1300 audit(1752578545.262:473): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1909c80 a2=3 a3=1 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:25.272358 kernel: audit: type=1327 audit(1752578545.262:473): proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:25.262000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:25.273569 systemd[1]: Started session-13.scope. Jul 15 11:22:25.273766 systemd-logind[1305]: New session 13 of user core. 
Jul 15 11:22:25.276000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.278000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.282944 kernel: audit: type=1105 audit(1752578545.276:474): pid=5090 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.282988 kernel: audit: type=1103 audit(1752578545.278:475): pid=5093 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.399825 sshd[5090]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:25.400000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.402515 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:40792.service: Deactivated successfully. Jul 15 11:22:25.403637 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 11:22:25.403874 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit. 
Jul 15 11:22:25.400000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.404661 systemd-logind[1305]: Removed session 13. Jul 15 11:22:25.406099 kernel: audit: type=1106 audit(1752578545.400:476): pid=5090 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.406168 kernel: audit: type=1104 audit(1752578545.400:477): pid=5090 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:25.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:40792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:27.680633 systemd[1]: run-containerd-runc-k8s.io-ecee4934b417390e4b483f41b19e53fa2a694c92813c871e5e0171b310edc1ac-runc.kJwU9k.mount: Deactivated successfully. Jul 15 11:22:30.403034 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:40798.service. Jul 15 11:22:30.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:40798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:30.406539 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:22:30.406652 kernel: audit: type=1130 audit(1752578550.402:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:40798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:30.443000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.444993 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 40798 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:30.446147 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:30.444000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.449680 kernel: audit: type=1101 audit(1752578550.443:480): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.449725 kernel: audit: type=1103 audit(1752578550.444:481): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.449748 kernel: audit: type=1006 audit(1752578550.444:482): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=14 res=1 Jul 15 11:22:30.444000 audit[5146]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe1015be0 a2=3 a3=1 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:30.453565 kernel: audit: type=1300 audit(1752578550.444:482): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe1015be0 a2=3 a3=1 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:30.453626 kernel: audit: type=1327 audit(1752578550.444:482): proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:30.444000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:30.453763 systemd[1]: Started session-14.scope. Jul 15 11:22:30.454057 systemd-logind[1305]: New session 14 of user core. 
Jul 15 11:22:30.457000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.458000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.462514 kernel: audit: type=1105 audit(1752578550.457:483): pid=5146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.462575 kernel: audit: type=1103 audit(1752578550.458:484): pid=5149 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.618247 sshd[5146]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:30.618000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.618000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.622193 systemd[1]: 
sshd@13-10.0.0.116:22-10.0.0.1:40798.service: Deactivated successfully. Jul 15 11:22:30.623246 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 11:22:30.623565 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit. Jul 15 11:22:30.624250 systemd-logind[1305]: Removed session 14. Jul 15 11:22:30.624345 kernel: audit: type=1106 audit(1752578550.618:485): pid=5146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.624383 kernel: audit: type=1104 audit(1752578550.618:486): pid=5146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:30.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:40798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:32.628907 env[1322]: time="2025-07-15T11:22:32.628795602Z" level=info msg="StopPodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\"" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.695 [WARNING][5174] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0", GenerateName:"calico-kube-controllers-5744454759-", Namespace:"calico-system", SelfLink:"", UID:"e8b304b2-34d6-4422-9aa6-042595bfafa7", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5744454759", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397", Pod:"calico-kube-controllers-5744454759-sfdbw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32f32b4664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.695 [INFO][5174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.695 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" iface="eth0" netns="" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.695 [INFO][5174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.695 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.723 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.724 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.724 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.734 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.734 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.735 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:32.739185 env[1322]: 2025-07-15 11:22:32.737 [INFO][5174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.739651 env[1322]: time="2025-07-15T11:22:32.739204285Z" level=info msg="TearDown network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" successfully" Jul 15 11:22:32.739651 env[1322]: time="2025-07-15T11:22:32.739233126Z" level=info msg="StopPodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" returns successfully" Jul 15 11:22:32.739984 env[1322]: time="2025-07-15T11:22:32.739941545Z" level=info msg="RemovePodSandbox for \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\"" Jul 15 11:22:32.740122 env[1322]: time="2025-07-15T11:22:32.740081708Z" level=info msg="Forcibly stopping sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\"" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.773 [WARNING][5203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0", GenerateName:"calico-kube-controllers-5744454759-", Namespace:"calico-system", SelfLink:"", UID:"e8b304b2-34d6-4422-9aa6-042595bfafa7", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5744454759", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee9041cf0413ba6dde9816f23298b8801f2b8a25a8375c42ec1d0f399d892397", Pod:"calico-kube-controllers-5744454759-sfdbw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia32f32b4664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.774 [INFO][5203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.774 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" iface="eth0" netns="" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.774 [INFO][5203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.774 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.792 [INFO][5211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.793 [INFO][5211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.793 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.803 [WARNING][5211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.803 [INFO][5211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" HandleID="k8s-pod-network.ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Workload="localhost-k8s-calico--kube--controllers--5744454759--sfdbw-eth0" Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.805 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:32.808439 env[1322]: 2025-07-15 11:22:32.806 [INFO][5203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85" Jul 15 11:22:32.808941 env[1322]: time="2025-07-15T11:22:32.808457679Z" level=info msg="TearDown network for sandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" successfully" Jul 15 11:22:32.814912 env[1322]: time="2025-07-15T11:22:32.814875529Z" level=info msg="RemovePodSandbox \"ceaf2a2aecd30789d3a481cf5bfdf1341a06a620fb2658561fca2241442c3b85\" returns successfully" Jul 15 11:22:32.815433 env[1322]: time="2025-07-15T11:22:32.815407263Z" level=info msg="StopPodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\"" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.858 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2bee128a-ec69-4c1c-9486-dda7cdd5da8f", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333", Pod:"coredns-7c65d6cfc9-z6zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78e3875e12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.859 [INFO][5229] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.859 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" iface="eth0" netns="" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.859 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.859 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.878 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.879 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.879 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.887 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.887 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.888 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:32.893072 env[1322]: 2025-07-15 11:22:32.890 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.893072 env[1322]: time="2025-07-15T11:22:32.892049132Z" level=info msg="TearDown network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" successfully" Jul 15 11:22:32.893072 env[1322]: time="2025-07-15T11:22:32.892079773Z" level=info msg="StopPodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" returns successfully" Jul 15 11:22:32.893072 env[1322]: time="2025-07-15T11:22:32.892530345Z" level=info msg="RemovePodSandbox for \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\"" Jul 15 11:22:32.893072 env[1322]: time="2025-07-15T11:22:32.892559705Z" level=info msg="Forcibly stopping sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\"" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.926 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2bee128a-ec69-4c1c-9486-dda7cdd5da8f", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58ebd140b97968d93f97b124e1502687d9ccf0d51f2abb5cf5504cdfe02dc333", Pod:"coredns-7c65d6cfc9-z6zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78e3875e12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.926 [INFO][5254] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.926 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" iface="eth0" netns="" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.926 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.926 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.944 [INFO][5262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.945 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.945 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.953 [WARNING][5262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.953 [INFO][5262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" HandleID="k8s-pod-network.3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Workload="localhost-k8s-coredns--7c65d6cfc9--z6zgm-eth0" Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.954 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:32.959066 env[1322]: 2025-07-15 11:22:32.956 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7" Jul 15 11:22:32.959066 env[1322]: time="2025-07-15T11:22:32.959029945Z" level=info msg="TearDown network for sandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" successfully" Jul 15 11:22:32.962344 env[1322]: time="2025-07-15T11:22:32.962314592Z" level=info msg="RemovePodSandbox \"3c63279a240c6b5f95613fcb9f0c138ec00821c32f02bc11dc3ae1cf1b97c0a7\" returns successfully" Jul 15 11:22:32.962778 env[1322]: time="2025-07-15T11:22:32.962745924Z" level=info msg="StopPodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\"" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.033 [WARNING][5280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"56b3e9f3-a41a-497f-bf77-8ab0f2093996", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b", Pod:"calico-apiserver-64cf58f847-st89j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0488b9e6353", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.033 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.033 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" iface="eth0" netns="" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.033 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.033 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.052 [INFO][5289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.052 [INFO][5289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.052 [INFO][5289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.061 [WARNING][5289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.061 [INFO][5289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.063 [INFO][5289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:33.066622 env[1322]: 2025-07-15 11:22:33.065 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.067245 env[1322]: time="2025-07-15T11:22:33.067208990Z" level=info msg="TearDown network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" successfully" Jul 15 11:22:33.067318 env[1322]: time="2025-07-15T11:22:33.067298032Z" level=info msg="StopPodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" returns successfully" Jul 15 11:22:33.067924 env[1322]: time="2025-07-15T11:22:33.067882408Z" level=info msg="RemovePodSandbox for \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\"" Jul 15 11:22:33.067993 env[1322]: time="2025-07-15T11:22:33.067935369Z" level=info msg="Forcibly stopping sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\"" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.110 [WARNING][5307] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"56b3e9f3-a41a-497f-bf77-8ab0f2093996", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0032ec23501bd3ad65ad31e5f02a4e5a2ec11425fac0f1960d534ee0f496e7b", Pod:"calico-apiserver-64cf58f847-st89j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0488b9e6353", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.110 [INFO][5307] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.110 [INFO][5307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" iface="eth0" netns="" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.110 [INFO][5307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.110 [INFO][5307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.128 [INFO][5315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.128 [INFO][5315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.128 [INFO][5315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.137 [WARNING][5315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.137 [INFO][5315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" HandleID="k8s-pod-network.4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Workload="localhost-k8s-calico--apiserver--64cf58f847--st89j-eth0" Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.139 [INFO][5315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:33.142873 env[1322]: 2025-07-15 11:22:33.141 [INFO][5307] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3" Jul 15 11:22:33.144916 env[1322]: time="2025-07-15T11:22:33.142897852Z" level=info msg="TearDown network for sandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" successfully" Jul 15 11:22:33.146114 env[1322]: time="2025-07-15T11:22:33.146080655Z" level=info msg="RemovePodSandbox \"4fd62df70a65e501c8ae48fda8d92672c528953c5cc67b1f9519909c16df68e3\" returns successfully" Jul 15 11:22:33.146616 env[1322]: time="2025-07-15T11:22:33.146589908Z" level=info msg="StopPodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\"" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.185 [WARNING][5333] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"321abb3c-e37b-40c2-9f34-4c9e458226fa", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76", Pod:"calico-apiserver-64cf58f847-7wqks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58b0c0ba00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.185 [INFO][5333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.185 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" iface="eth0" netns="" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.185 [INFO][5333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.185 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.201 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.201 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.201 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.213 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.213 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.216 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:33.220994 env[1322]: 2025-07-15 11:22:33.218 [INFO][5333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.221562 env[1322]: time="2025-07-15T11:22:33.221532791Z" level=info msg="TearDown network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" successfully" Jul 15 11:22:33.221667 env[1322]: time="2025-07-15T11:22:33.221648674Z" level=info msg="StopPodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" returns successfully" Jul 15 11:22:33.222321 env[1322]: time="2025-07-15T11:22:33.222278410Z" level=info msg="RemovePodSandbox for \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\"" Jul 15 11:22:33.222400 env[1322]: time="2025-07-15T11:22:33.222322211Z" level=info msg="Forcibly stopping sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\"" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.259 [WARNING][5360] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0", GenerateName:"calico-apiserver-64cf58f847-", Namespace:"calico-apiserver", SelfLink:"", UID:"321abb3c-e37b-40c2-9f34-4c9e458226fa", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64cf58f847", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cb495ab541e21107b2a8b0c699b761f599638c3ddb9182caae63c9dd3d61c76", Pod:"calico-apiserver-64cf58f847-7wqks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58b0c0ba00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.259 [INFO][5360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.259 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" iface="eth0" netns="" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.259 [INFO][5360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.259 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.278 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.278 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.278 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.287 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.287 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" HandleID="k8s-pod-network.19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Workload="localhost-k8s-calico--apiserver--64cf58f847--7wqks-eth0" Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.289 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:33.292702 env[1322]: 2025-07-15 11:22:33.291 [INFO][5360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e" Jul 15 11:22:33.293302 env[1322]: time="2025-07-15T11:22:33.293257629Z" level=info msg="TearDown network for sandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" successfully" Jul 15 11:22:33.296013 env[1322]: time="2025-07-15T11:22:33.295982980Z" level=info msg="RemovePodSandbox \"19f206ca3caa8460279d4f52e9b8933b1ad50ddf22b5d818bb09a8565317872e\" returns successfully" Jul 15 11:22:33.296617 env[1322]: time="2025-07-15T11:22:33.296593436Z" level=info msg="StopPodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\"" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.333 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h54b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5caaf704-0a5d-4b3c-abd2-5b536ffec524", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061", Pod:"csi-node-driver-h54b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8ccdbdeb67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.333 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.333 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" iface="eth0" netns="" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.333 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.333 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.355 [INFO][5396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.355 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.355 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.364 [WARNING][5396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.364 [INFO][5396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.365 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.373425 env[1322]: 2025-07-15 11:22:33.370 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.373916 env[1322]: time="2025-07-15T11:22:33.373460968Z" level=info msg="TearDown network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" successfully" Jul 15 11:22:33.373916 env[1322]: time="2025-07-15T11:22:33.373496369Z" level=info msg="StopPodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" returns successfully" Jul 15 11:22:33.375937 env[1322]: time="2025-07-15T11:22:33.375888512Z" level=info msg="RemovePodSandbox for \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\"" Jul 15 11:22:33.376061 env[1322]: time="2025-07-15T11:22:33.376012875Z" level=info msg="Forcibly stopping sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\"" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.412 [WARNING][5414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h54b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5caaf704-0a5d-4b3c-abd2-5b536ffec524", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6794dc7e03d6ae33702e5e4075c2d40674093f8a1b17e1f800e6f02b1d82061", Pod:"csi-node-driver-h54b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8ccdbdeb67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.412 [INFO][5414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.412 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" iface="eth0" netns="" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.412 [INFO][5414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.412 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.442 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.443 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.443 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.451 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.451 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" HandleID="k8s-pod-network.ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Workload="localhost-k8s-csi--node--driver--h54b2-eth0" Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.453 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.458698 env[1322]: 2025-07-15 11:22:33.455 [INFO][5414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e" Jul 15 11:22:33.458698 env[1322]: time="2025-07-15T11:22:33.458667959Z" level=info msg="TearDown network for sandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" successfully" Jul 15 11:22:33.461827 env[1322]: time="2025-07-15T11:22:33.461789881Z" level=info msg="RemovePodSandbox \"ae2e2a395e836f24e6c0aa22d0c45c5f801f8c3c50f0541e750a8368da15b89e\" returns successfully" Jul 15 11:22:33.462524 env[1322]: time="2025-07-15T11:22:33.462492419Z" level=info msg="StopPodSandbox for \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\"" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.502 [WARNING][5441] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" WorkloadEndpoint="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.502 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.502 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" iface="eth0" netns="" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.502 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.502 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.521 [INFO][5450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.521 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.521 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.531 [WARNING][5450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.531 [INFO][5450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.532 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.535875 env[1322]: 2025-07-15 11:22:33.534 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.536315 env[1322]: time="2025-07-15T11:22:33.535913342Z" level=info msg="TearDown network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" successfully" Jul 15 11:22:33.536315 env[1322]: time="2025-07-15T11:22:33.535945983Z" level=info msg="StopPodSandbox for \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" returns successfully" Jul 15 11:22:33.536590 env[1322]: time="2025-07-15T11:22:33.536563159Z" level=info msg="RemovePodSandbox for \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\"" Jul 15 11:22:33.536791 env[1322]: time="2025-07-15T11:22:33.536721603Z" level=info msg="Forcibly stopping sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\"" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.569 [WARNING][5468] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" WorkloadEndpoint="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.569 [INFO][5468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.569 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" iface="eth0" netns="" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.569 [INFO][5468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.569 [INFO][5468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.589 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.589 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.589 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.598 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.598 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" HandleID="k8s-pod-network.fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Workload="localhost-k8s-whisker--6d7f6f864f--8v7kf-eth0" Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.600 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.603721 env[1322]: 2025-07-15 11:22:33.602 [INFO][5468] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e" Jul 15 11:22:33.604243 env[1322]: time="2025-07-15T11:22:33.604200050Z" level=info msg="TearDown network for sandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" successfully" Jul 15 11:22:33.607168 env[1322]: time="2025-07-15T11:22:33.607133966Z" level=info msg="RemovePodSandbox \"fc737add3f07af382074af0af1a12d32c370df877b1c6b2a9c0863c03023684e\" returns successfully" Jul 15 11:22:33.607763 env[1322]: time="2025-07-15T11:22:33.607734662Z" level=info msg="StopPodSandbox for \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\"" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.641 [WARNING][5495] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"234de6b0-3684-41e7-9d27-ef2f8683df1a", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d", Pod:"coredns-7c65d6cfc9-m5sfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7b46f31f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.642 [INFO][5495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.642 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" iface="eth0" netns="" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.642 [INFO][5495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.642 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.659 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.660 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.660 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.668 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.668 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.669 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.673773 env[1322]: 2025-07-15 11:22:33.672 [INFO][5495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.674398 env[1322]: time="2025-07-15T11:22:33.673791632Z" level=info msg="TearDown network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" successfully" Jul 15 11:22:33.674398 env[1322]: time="2025-07-15T11:22:33.673822272Z" level=info msg="StopPodSandbox for \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" returns successfully" Jul 15 11:22:33.674879 env[1322]: time="2025-07-15T11:22:33.674829539Z" level=info msg="RemovePodSandbox for \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\"" Jul 15 11:22:33.674942 env[1322]: time="2025-07-15T11:22:33.674901221Z" level=info msg="Forcibly stopping sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\"" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.708 [WARNING][5522] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"234de6b0-3684-41e7-9d27-ef2f8683df1a", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"272466a3dd26bf5146da82da905c5aec3c3c33cca4b653523b454b8f12be6d7d", Pod:"coredns-7c65d6cfc9-m5sfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7b46f31f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.708 [INFO][5522] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.708 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" iface="eth0" netns="" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.708 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.708 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.726 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.726 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.727 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.735 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.735 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" HandleID="k8s-pod-network.a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Workload="localhost-k8s-coredns--7c65d6cfc9--m5sfv-eth0" Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.737 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:22:33.740947 env[1322]: 2025-07-15 11:22:33.739 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b" Jul 15 11:22:33.741382 env[1322]: time="2025-07-15T11:22:33.740969351Z" level=info msg="TearDown network for sandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" successfully" Jul 15 11:22:33.743912 env[1322]: time="2025-07-15T11:22:33.743883147Z" level=info msg="RemovePodSandbox \"a16cc2518898de62ce70be95b53f8145b14bc5d7877c58588607cfb52c7d415b\" returns successfully" Jul 15 11:22:33.744427 env[1322]: time="2025-07-15T11:22:33.744398160Z" level=info msg="StopPodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\"" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.777 [WARNING][5549] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928", Pod:"goldmane-58fd7646b9-fnwj2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali642129b5dee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.777 [INFO][5549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.777 [INFO][5549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" iface="eth0" netns="" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.777 [INFO][5549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.777 [INFO][5549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.801 [INFO][5558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.801 [INFO][5558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.801 [INFO][5558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.811 [WARNING][5558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.811 [INFO][5558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.812 [INFO][5558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.816231 env[1322]: 2025-07-15 11:22:33.814 [INFO][5549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.816638 env[1322]: time="2025-07-15T11:22:33.816257562Z" level=info msg="TearDown network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" successfully" Jul 15 11:22:33.816638 env[1322]: time="2025-07-15T11:22:33.816289163Z" level=info msg="StopPodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" returns successfully" Jul 15 11:22:33.817095 env[1322]: time="2025-07-15T11:22:33.817063023Z" level=info msg="RemovePodSandbox for \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\"" Jul 15 11:22:33.817166 env[1322]: time="2025-07-15T11:22:33.817103824Z" level=info msg="Forcibly stopping sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\"" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.855 [WARNING][5575] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bb2b16b0-5f08-47fc-9227-ffb2cce80eb6", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"512f8e0bdd546fd7bbeea946bd668c411dd1db4dc8c766ffcc972b6537491928", Pod:"goldmane-58fd7646b9-fnwj2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali642129b5dee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.855 [INFO][5575] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.855 [INFO][5575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" iface="eth0" netns="" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.855 [INFO][5575] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.855 [INFO][5575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.874 [INFO][5583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.874 [INFO][5583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.874 [INFO][5583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.882 [WARNING][5583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.882 [INFO][5583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" HandleID="k8s-pod-network.b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Workload="localhost-k8s-goldmane--58fd7646b9--fnwj2-eth0" Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.884 [INFO][5583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:22:33.888124 env[1322]: 2025-07-15 11:22:33.886 [INFO][5575] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1" Jul 15 11:22:33.888520 env[1322]: time="2025-07-15T11:22:33.888151924Z" level=info msg="TearDown network for sandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" successfully" Jul 15 11:22:33.890938 env[1322]: time="2025-07-15T11:22:33.890910116Z" level=info msg="RemovePodSandbox \"b4e912fe67186bb3b579f360b535bbf5d79e736b6718e68c05680c23eab3e3e1\" returns successfully" Jul 15 11:22:35.621768 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:34972.service. Jul 15 11:22:35.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:34972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:35.624804 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:22:35.624898 kernel: audit: type=1130 audit(1752578555.620:488): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:34972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:35.669000 audit[5591]: USER_ACCT pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.671502 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 34972 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:35.673160 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:35.670000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.677138 kernel: audit: type=1101 audit(1752578555.669:489): pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.677191 kernel: audit: type=1103 audit(1752578555.670:490): pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.677208 kernel: audit: type=1006 audit(1752578555.670:491): pid=5591 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 15 11:22:35.677234 kernel: audit: type=1300 audit(1752578555.670:491): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7e3da80 a2=3 a3=1 items=0 ppid=1 pid=5591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 15 11:22:35.670000 audit[5591]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7e3da80 a2=3 a3=1 items=0 ppid=1 pid=5591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:35.677507 systemd-logind[1305]: New session 15 of user core. Jul 15 11:22:35.682172 kernel: audit: type=1327 audit(1752578555.670:491): proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:35.670000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:35.678766 systemd[1]: Started session-15.scope. Jul 15 11:22:35.687000 audit[5591]: USER_START pid=5591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.688000 audit[5594]: CRED_ACQ pid=5594 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.693609 kernel: audit: type=1105 audit(1752578555.687:492): pid=5591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.693655 kernel: audit: type=1103 audit(1752578555.688:493): pid=5594 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.957350 sshd[5591]: pam_unix(sshd:session): session closed for user core Jul 15 
11:22:35.956000 audit[5591]: USER_END pid=5591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.956000 audit[5591]: CRED_DISP pid=5591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.960092 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:34972.service: Deactivated successfully. Jul 15 11:22:35.961575 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 11:22:35.962694 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit. Jul 15 11:22:35.963417 kernel: audit: type=1106 audit(1752578555.956:494): pid=5591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.963456 kernel: audit: type=1104 audit(1752578555.956:495): pid=5591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:35.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:34972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:35.964047 systemd-logind[1305]: Removed session 15. 
Jul 15 11:22:37.661866 kubelet[2105]: I0715 11:22:37.661107 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:22:37.723000 audit[5606]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:37.723000 audit[5606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffcc603270 a2=0 a3=1 items=0 ppid=2215 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:37.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:37.736000 audit[5606]: NETFILTER_CFG table=nat:127 family=2 entries=36 op=nft_register_chain pid=5606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:37.736000 audit[5606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffcc603270 a2=0 a3=1 items=0 ppid=2215 pid=5606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:37.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:40.960870 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:34976.service. Jul 15 11:22:40.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:34976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:40.963553 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:22:40.963633 kernel: audit: type=1130 audit(1752578560.960:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:34976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:40.997000 audit[5609]: USER_ACCT pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:40.998528 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 34976 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:41.000134 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:40.998000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.002876 kernel: audit: type=1101 audit(1752578560.997:500): pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.002951 kernel: audit: type=1103 audit(1752578560.998:501): pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.002975 kernel: audit: type=1006 audit(1752578560.998:502): pid=5609 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=16 res=1 Jul 15 11:22:40.998000 audit[5609]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8e74430 a2=3 a3=1 items=0 ppid=1 pid=5609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:41.004275 systemd[1]: Started session-16.scope. Jul 15 11:22:41.004454 systemd-logind[1305]: New session 16 of user core. Jul 15 11:22:41.006460 kernel: audit: type=1300 audit(1752578560.998:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8e74430 a2=3 a3=1 items=0 ppid=1 pid=5609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:41.006518 kernel: audit: type=1327 audit(1752578560.998:502): proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:40.998000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:41.007000 audit[5609]: USER_START pid=5609 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.008000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.012778 kernel: audit: type=1105 audit(1752578561.007:503): pid=5609 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.012847 kernel: 
audit: type=1103 audit(1752578561.008:504): pid=5612 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.142372 sshd[5609]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:41.143000 audit[5609]: USER_END pid=5609 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.145634 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:34978.service. Jul 15 11:22:41.146801 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:34976.service: Deactivated successfully. Jul 15 11:22:41.147803 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 11:22:41.144000 audit[5609]: CRED_DISP pid=5609 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.150382 kernel: audit: type=1106 audit(1752578561.143:505): pid=5609 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.150453 kernel: audit: type=1104 audit(1752578561.144:506): pid=5609 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:34978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:41.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:34976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:41.150583 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit. Jul 15 11:22:41.151321 systemd-logind[1305]: Removed session 16. Jul 15 11:22:41.184000 audit[5622]: USER_ACCT pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.185355 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 34978 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:41.185000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.185000 audit[5622]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1851000 a2=3 a3=1 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:41.185000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:41.186765 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:41.190291 systemd-logind[1305]: New session 17 of user core. Jul 15 11:22:41.190550 systemd[1]: Started session-17.scope. 
Jul 15 11:22:41.192000 audit[5622]: USER_START pid=5622 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.194000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.399974 sshd[5622]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:41.402445 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:34986.service. Jul 15 11:22:41.401000 audit[5622]: USER_END pid=5622 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.401000 audit[5622]: CRED_DISP pid=5622 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:34986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:41.404188 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:34978.service: Deactivated successfully. Jul 15 11:22:41.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:34978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:41.405635 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit. Jul 15 11:22:41.405686 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 11:22:41.406605 systemd-logind[1305]: Removed session 17. Jul 15 11:22:41.444000 audit[5634]: USER_ACCT pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.445473 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 34986 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:41.445000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.445000 audit[5634]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb105480 a2=3 a3=1 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:41.445000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:41.447087 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:41.450541 systemd-logind[1305]: New session 18 of user core. Jul 15 11:22:41.451498 systemd[1]: Started session-18.scope. 
Jul 15 11:22:41.454000 audit[5634]: USER_START pid=5634 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:41.455000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.053000 audit[5652]: NETFILTER_CFG table=filter:128 family=2 entries=22 op=nft_register_rule pid=5652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:43.053000 audit[5652]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=ffffca823560 a2=0 a3=1 items=0 ppid=2215 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:43.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:43.058000 audit[5652]: NETFILTER_CFG table=nat:129 family=2 entries=24 op=nft_register_rule pid=5652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:43.058000 audit[5652]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffca823560 a2=0 a3=1 items=0 ppid=2215 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:43.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:43.072796 sshd[5634]: 
pam_unix(sshd:session): session closed for user core Jul 15 11:22:43.073000 audit[5634]: USER_END pid=5634 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.075113 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:37812.service. Jul 15 11:22:43.074000 audit[5634]: CRED_DISP pid=5634 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:37812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:43.079635 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:34986.service: Deactivated successfully. Jul 15 11:22:43.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:34986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:43.081093 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 11:22:43.081174 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit. Jul 15 11:22:43.084045 systemd-logind[1305]: Removed session 18. 
Jul 15 11:22:43.086000 audit[5657]: NETFILTER_CFG table=filter:130 family=2 entries=34 op=nft_register_rule pid=5657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:43.086000 audit[5657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=ffffe9db1590 a2=0 a3=1 items=0 ppid=2215 pid=5657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:43.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:43.091000 audit[5657]: NETFILTER_CFG table=nat:131 family=2 entries=24 op=nft_register_rule pid=5657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:22:43.091000 audit[5657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffe9db1590 a2=0 a3=1 items=0 ppid=2215 pid=5657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:43.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:22:43.122000 audit[5653]: USER_ACCT pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.123791 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:43.124000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.124000 audit[5653]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7288670 a2=3 a3=1 items=0 ppid=1 pid=5653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:43.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:43.125377 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:43.129388 systemd-logind[1305]: New session 19 of user core. Jul 15 11:22:43.129778 systemd[1]: Started session-19.scope. Jul 15 11:22:43.133000 audit[5653]: USER_START pid=5653 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.134000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.584850 sshd[5653]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:43.587416 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:37818.service. Jul 15 11:22:43.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.116:22-10.0.0.1:37818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:22:43.590000 audit[5653]: USER_END pid=5653 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.590000 audit[5653]: CRED_DISP pid=5653 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.592579 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:37812.service: Deactivated successfully. Jul 15 11:22:43.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:37812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:43.593651 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 11:22:43.594006 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit. Jul 15 11:22:43.594752 systemd-logind[1305]: Removed session 19. 
Jul 15 11:22:43.630000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.631928 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 37818 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:22:43.632000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.632000 audit[5668]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff8abad10 a2=3 a3=1 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:22:43.632000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:22:43.633642 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:22:43.637590 systemd-logind[1305]: New session 20 of user core. Jul 15 11:22:43.637908 systemd[1]: Started session-20.scope. 
Jul 15 11:22:43.640000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.642000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.753199 sshd[5668]: pam_unix(sshd:session): session closed for user core Jul 15 11:22:43.753000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.753000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:22:43.756022 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit. Jul 15 11:22:43.756231 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:37818.service: Deactivated successfully. Jul 15 11:22:43.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.116:22-10.0.0.1:37818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:22:43.757025 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 11:22:43.757409 systemd-logind[1305]: Removed session 20. 
Jul 15 11:22:48.310000 audit[5691]: NETFILTER_CFG table=filter:132 family=2 entries=22 op=nft_register_rule pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:22:48.314019 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jul 15 11:22:48.314101 kernel: audit: type=1325 audit(1752578568.310:548): table=filter:132 family=2 entries=22 op=nft_register_rule pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:22:48.314139 kernel: audit: type=1300 audit(1752578568.310:548): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffff4550270 a2=0 a3=1 items=0 ppid=2215 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:48.310000 audit[5691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffff4550270 a2=0 a3=1 items=0 ppid=2215 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:48.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:22:48.318816 kernel: audit: type=1327 audit(1752578568.310:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:22:48.319000 audit[5691]: NETFILTER_CFG table=nat:133 family=2 entries=108 op=nft_register_chain pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:22:48.319000 audit[5691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50220 a0=3 a1=fffff4550270 a2=0 a3=1 items=0 ppid=2215 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:48.325242 kernel: audit: type=1325 audit(1752578568.319:549): table=nat:133 family=2 entries=108 op=nft_register_chain pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:22:48.325315 kernel: audit: type=1300 audit(1752578568.319:549): arch=c00000b7 syscall=211 success=yes exit=50220 a0=3 a1=fffff4550270 a2=0 a3=1 items=0 ppid=2215 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:48.325337 kernel: audit: type=1327 audit(1752578568.319:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:22:48.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:22:48.756490 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:37834.service.
Jul 15 11:22:48.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:37834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:48.758867 kernel: audit: type=1130 audit(1752578568.755:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:37834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:48.793000 audit[5693]: USER_ACCT pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.794189 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 37834 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:22:48.795605 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:22:48.794000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.798532 kernel: audit: type=1101 audit(1752578568.793:551): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.798598 kernel: audit: type=1103 audit(1752578568.794:552): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.798619 kernel: audit: type=1006 audit(1752578568.794:553): pid=5693 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Jul 15 11:22:48.794000 audit[5693]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc2410b0 a2=3 a3=1 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:48.794000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:22:48.799889 systemd[1]: Started session-21.scope.
Jul 15 11:22:48.800264 systemd-logind[1305]: New session 21 of user core.
Jul 15 11:22:48.803000 audit[5693]: USER_START pid=5693 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.804000 audit[5696]: CRED_ACQ pid=5696 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.934175 sshd[5693]: pam_unix(sshd:session): session closed for user core
Jul 15 11:22:48.934000 audit[5693]: USER_END pid=5693 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.934000 audit[5693]: CRED_DISP pid=5693 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:48.936690 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:37834.service: Deactivated successfully.
Jul 15 11:22:48.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:37834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:48.937570 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 11:22:48.938400 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit.
Jul 15 11:22:48.939304 systemd-logind[1305]: Removed session 21.
Jul 15 11:22:53.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:53372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:53.937414 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:53372.service.
Jul 15 11:22:53.938092 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jul 15 11:22:53.938140 kernel: audit: type=1130 audit(1752578573.936:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:53372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:53.981997 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 53372 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:22:53.981000 audit[5749]: USER_ACCT pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:53.983723 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:22:53.982000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:53.989010 kernel: audit: type=1101 audit(1752578573.981:560): pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:53.989099 kernel: audit: type=1103 audit(1752578573.982:561): pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:53.991289 kernel: audit: type=1006 audit(1752578573.982:562): pid=5749 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Jul 15 11:22:53.991361 kernel: audit: type=1300 audit(1752578573.982:562): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd15e3310 a2=3 a3=1 items=0 ppid=1 pid=5749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:53.982000 audit[5749]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd15e3310 a2=3 a3=1 items=0 ppid=1 pid=5749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:53.995868 kernel: audit: type=1327 audit(1752578573.982:562): proctitle=737368643A20636F7265205B707269765D
Jul 15 11:22:53.982000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:22:53.998016 systemd[1]: Started session-22.scope.
Jul 15 11:22:53.998535 systemd-logind[1305]: New session 22 of user core.
Jul 15 11:22:54.002000 audit[5749]: USER_START pid=5749 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.006862 kernel: audit: type=1105 audit(1752578574.002:563): pid=5749 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.015000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.020372 kernel: audit: type=1103 audit(1752578574.015:564): pid=5761 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.141399 sshd[5749]: pam_unix(sshd:session): session closed for user core
Jul 15 11:22:54.141000 audit[5749]: USER_END pid=5749 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.141000 audit[5749]: CRED_DISP pid=5749 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.146180 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:53372.service: Deactivated successfully.
Jul 15 11:22:54.147097 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 11:22:54.147675 kernel: audit: type=1106 audit(1752578574.141:565): pid=5749 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.147744 kernel: audit: type=1104 audit(1752578574.141:566): pid=5749 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:54.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:53372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:54.148272 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit.
Jul 15 11:22:54.149097 systemd-logind[1305]: Removed session 22.
Jul 15 11:22:55.648966 kubelet[2105]: E0715 11:22:55.648916 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:22:56.730531 systemd[1]: run-containerd-runc-k8s.io-8f606f26d70f0337e48a4a0f9bf844dce1a95b1fae0faffb0ad0537d859a3e5b-runc.27t6yx.mount: Deactivated successfully.
Jul 15 11:22:57.649494 kubelet[2105]: E0715 11:22:57.649450 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:22:57.749000 audit[5829]: NETFILTER_CFG table=filter:134 family=2 entries=9 op=nft_register_rule pid=5829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:22:57.749000 audit[5829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff2ee7670 a2=0 a3=1 items=0 ppid=2215 pid=5829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:57.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:22:57.761000 audit[5829]: NETFILTER_CFG table=nat:135 family=2 entries=55 op=nft_register_chain pid=5829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:22:57.761000 audit[5829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20100 a0=3 a1=fffff2ee7670 a2=0 a3=1 items=0 ppid=2215 pid=5829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:57.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:22:59.144166 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:53386.service.
Jul 15 11:22:59.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:53386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:59.144916 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jul 15 11:22:59.144956 kernel: audit: type=1130 audit(1752578579.143:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:53386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:59.182000 audit[5830]: USER_ACCT pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.183689 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 53386 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:22:59.185000 audit[5830]: CRED_ACQ pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.187107 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:22:59.188970 kernel: audit: type=1101 audit(1752578579.182:571): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.189018 kernel: audit: type=1103 audit(1752578579.185:572): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.189040 kernel: audit: type=1006 audit(1752578579.185:573): pid=5830 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jul 15 11:22:59.190293 kernel: audit: type=1300 audit(1752578579.185:573): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee035060 a2=3 a3=1 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:59.185000 audit[5830]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee035060 a2=3 a3=1 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:22:59.190870 systemd-logind[1305]: New session 23 of user core.
Jul 15 11:22:59.191491 systemd[1]: Started session-23.scope.
Jul 15 11:22:59.185000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:22:59.193470 kernel: audit: type=1327 audit(1752578579.185:573): proctitle=737368643A20636F7265205B707269765D
Jul 15 11:22:59.194000 audit[5830]: USER_START pid=5830 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.195000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.199914 kernel: audit: type=1105 audit(1752578579.194:574): pid=5830 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.199953 kernel: audit: type=1103 audit(1752578579.195:575): pid=5833 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.298519 sshd[5830]: pam_unix(sshd:session): session closed for user core
Jul 15 11:22:59.298000 audit[5830]: USER_END pid=5830 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.301136 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit.
Jul 15 11:22:59.301338 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:53386.service: Deactivated successfully.
Jul 15 11:22:59.298000 audit[5830]: CRED_DISP pid=5830 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.302191 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 11:22:59.302555 systemd-logind[1305]: Removed session 23.
Jul 15 11:22:59.304223 kernel: audit: type=1106 audit(1752578579.298:576): pid=5830 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.304287 kernel: audit: type=1104 audit(1752578579.298:577): pid=5830 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:22:59.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:53386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:22:59.649295 kubelet[2105]: E0715 11:22:59.649217 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:23:04.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:45586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:23:04.301729 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:45586.service.
Jul 15 11:23:04.304856 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 15 11:23:04.304934 kernel: audit: type=1130 audit(1752578584.301:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:45586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:23:04.341000 audit[5844]: USER_ACCT pid=5844 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.342565 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 45586 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:23:04.344342 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:23:04.341000 audit[5844]: CRED_ACQ pid=5844 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.346966 kernel: audit: type=1101 audit(1752578584.341:580): pid=5844 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.347028 kernel: audit: type=1103 audit(1752578584.341:581): pid=5844 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.347052 kernel: audit: type=1006 audit(1752578584.341:582): pid=5844 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jul 15 11:23:04.347734 systemd-logind[1305]: New session 24 of user core.
Jul 15 11:23:04.348917 kernel: audit: type=1300 audit(1752578584.341:582): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe93af4c0 a2=3 a3=1 items=0 ppid=1 pid=5844 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:23:04.341000 audit[5844]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe93af4c0 a2=3 a3=1 items=0 ppid=1 pid=5844 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:23:04.348495 systemd[1]: Started session-24.scope.
Jul 15 11:23:04.341000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:23:04.351644 kernel: audit: type=1327 audit(1752578584.341:582): proctitle=737368643A20636F7265205B707269765D
Jul 15 11:23:04.351000 audit[5844]: USER_START pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.351000 audit[5847]: CRED_ACQ pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.357928 kernel: audit: type=1105 audit(1752578584.351:583): pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.357979 kernel: audit: type=1103 audit(1752578584.351:584): pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.504029 sshd[5844]: pam_unix(sshd:session): session closed for user core
Jul 15 11:23:04.504000 audit[5844]: USER_END pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.506722 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:45586.service: Deactivated successfully.
Jul 15 11:23:04.507553 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 11:23:04.504000 audit[5844]: CRED_DISP pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.508383 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit.
Jul 15 11:23:04.510456 kernel: audit: type=1106 audit(1752578584.504:585): pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.510512 kernel: audit: type=1104 audit(1752578584.504:586): pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:23:04.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:45586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:23:04.510937 systemd-logind[1305]: Removed session 24.