May 8 00:38:39.186971 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 7 23:10:51 -00 2025 May 8 00:38:39.186999 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:38:39.187007 kernel: BIOS-provided physical RAM map: May 8 00:38:39.187012 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 8 00:38:39.187018 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 8 00:38:39.187023 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 8 00:38:39.187030 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 8 00:38:39.187035 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 8 00:38:39.187045 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:38:39.187050 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 8 00:38:39.187056 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 00:38:39.187061 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 8 00:38:39.187067 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 00:38:39.187072 kernel: NX (Execute Disable) protection: active May 8 00:38:39.187085 kernel: SMBIOS 2.8 present. 
May 8 00:38:39.187091 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 8 00:38:39.187098 kernel: Hypervisor detected: KVM May 8 00:38:39.187104 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:38:39.187109 kernel: kvm-clock: cpu 0, msr 8a198001, primary cpu clock May 8 00:38:39.187115 kernel: kvm-clock: using sched offset of 3215475603 cycles May 8 00:38:39.187122 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:38:39.187128 kernel: tsc: Detected 2794.748 MHz processor May 8 00:38:39.187134 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:38:39.187145 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:38:39.187151 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 8 00:38:39.187163 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:38:39.187170 kernel: Using GB pages for direct mapping May 8 00:38:39.187176 kernel: ACPI: Early table checksum verification disabled May 8 00:38:39.187182 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 8 00:38:39.187188 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187197 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187203 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187213 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 8 00:38:39.187220 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187226 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187232 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187238 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:38:39.187244 
kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 8 00:38:39.187250 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 8 00:38:39.187256 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 8 00:38:39.187266 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 8 00:38:39.187273 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 8 00:38:39.187279 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 8 00:38:39.187288 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 8 00:38:39.187294 kernel: No NUMA configuration found May 8 00:38:39.187301 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 8 00:38:39.187308 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 8 00:38:39.187315 kernel: Zone ranges: May 8 00:38:39.187321 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:38:39.187338 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 8 00:38:39.187344 kernel: Normal empty May 8 00:38:39.187351 kernel: Movable zone start for each node May 8 00:38:39.187357 kernel: Early memory node ranges May 8 00:38:39.187364 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 8 00:38:39.187371 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 8 00:38:39.187379 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 8 00:38:39.187388 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:38:39.187395 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 8 00:38:39.187401 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 8 00:38:39.187413 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:38:39.187420 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:38:39.187426 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:38:39.187433 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:38:39.187440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:38:39.187446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:38:39.187455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:38:39.187461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:38:39.187467 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:38:39.187474 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:38:39.187480 kernel: TSC deadline timer available May 8 00:38:39.187487 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:38:39.187493 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:38:39.187500 kernel: kvm-guest: setup PV sched yield May 8 00:38:39.187506 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 8 00:38:39.187514 kernel: Booting paravirtualized kernel on KVM May 8 00:38:39.187520 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:38:39.187529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 8 00:38:39.187536 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 8 00:38:39.187542 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 8 00:38:39.187549 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:38:39.187555 kernel: kvm-guest: setup async PF for cpu 0 May 8 00:38:39.187561 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 8 00:38:39.187568 kernel: kvm-guest: PV spinlocks enabled May 8 00:38:39.187576 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:38:39.187582 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 May 8 00:38:39.187589 kernel: Policy zone: DMA32 May 8 00:38:39.187596 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:38:39.187603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:38:39.187610 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:38:39.187620 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:38:39.187632 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:38:39.187646 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2279K rwdata, 13724K rodata, 47464K init, 4116K bss, 134796K reserved, 0K cma-reserved) May 8 00:38:39.187653 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:38:39.187659 kernel: ftrace: allocating 34584 entries in 136 pages May 8 00:38:39.187666 kernel: ftrace: allocated 136 pages with 2 groups May 8 00:38:39.187672 kernel: rcu: Hierarchical RCU implementation. May 8 00:38:39.187689 kernel: rcu: RCU event tracing is enabled. May 8 00:38:39.187698 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:38:39.187706 kernel: Rude variant of Tasks RCU enabled. May 8 00:38:39.187714 kernel: Tracing variant of Tasks RCU enabled. May 8 00:38:39.187724 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 00:38:39.187732 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:38:39.187740 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:38:39.187748 kernel: random: crng init done May 8 00:38:39.187756 kernel: Console: colour VGA+ 80x25 May 8 00:38:39.187764 kernel: printk: console [ttyS0] enabled May 8 00:38:39.187772 kernel: ACPI: Core revision 20210730 May 8 00:38:39.187781 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:38:39.187788 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:38:39.187796 kernel: x2apic enabled May 8 00:38:39.187803 kernel: Switched APIC routing to physical x2apic. May 8 00:38:39.187809 kernel: kvm-guest: setup PV IPIs May 8 00:38:39.187823 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:38:39.187864 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:38:39.187874 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 8 00:38:39.187880 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:38:39.187887 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:38:39.187894 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:38:39.187908 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:38:39.187914 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:38:39.187921 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:38:39.187930 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:38:39.187936 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:38:39.187945 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:38:39.187952 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:38:39.187960 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 8 00:38:39.187967 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:38:39.187975 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:38:39.187982 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:38:39.187991 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:38:39.188001 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 8 00:38:39.188010 kernel: Freeing SMP alternatives memory: 32K May 8 00:38:39.188017 kernel: pid_max: default: 32768 minimum: 301 May 8 00:38:39.188025 kernel: LSM: Security Framework initializing May 8 00:38:39.188040 kernel: SELinux: Initializing. 
May 8 00:38:39.188048 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:38:39.188055 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:38:39.188062 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:38:39.188069 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:38:39.188076 kernel: ... version: 0 May 8 00:38:39.188082 kernel: ... bit width: 48 May 8 00:38:39.188089 kernel: ... generic registers: 6 May 8 00:38:39.188098 kernel: ... value mask: 0000ffffffffffff May 8 00:38:39.188109 kernel: ... max period: 00007fffffffffff May 8 00:38:39.188116 kernel: ... fixed-purpose events: 0 May 8 00:38:39.188122 kernel: ... event mask: 000000000000003f May 8 00:38:39.188129 kernel: signal: max sigframe size: 1776 May 8 00:38:39.188136 kernel: rcu: Hierarchical SRCU implementation. May 8 00:38:39.188143 kernel: smp: Bringing up secondary CPUs ... May 8 00:38:39.188150 kernel: x86: Booting SMP configuration: May 8 00:38:39.188157 kernel: .... 
node #0, CPUs: #1 May 8 00:38:39.188166 kernel: kvm-clock: cpu 1, msr 8a198041, secondary cpu clock May 8 00:38:39.188178 kernel: kvm-guest: setup async PF for cpu 1 May 8 00:38:39.188188 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 8 00:38:39.188195 kernel: #2 May 8 00:38:39.188203 kernel: kvm-clock: cpu 2, msr 8a198081, secondary cpu clock May 8 00:38:39.188212 kernel: kvm-guest: setup async PF for cpu 2 May 8 00:38:39.188221 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 8 00:38:39.188228 kernel: #3 May 8 00:38:39.188235 kernel: kvm-clock: cpu 3, msr 8a1980c1, secondary cpu clock May 8 00:38:39.188242 kernel: kvm-guest: setup async PF for cpu 3 May 8 00:38:39.188248 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 8 00:38:39.188257 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:38:39.188264 kernel: smpboot: Max logical packages: 1 May 8 00:38:39.188270 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:38:39.188277 kernel: devtmpfs: initialized May 8 00:38:39.188284 kernel: x86/mm: Memory block size: 128MB May 8 00:38:39.188291 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:38:39.188298 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:38:39.188305 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:38:39.188312 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:38:39.188322 kernel: audit: initializing netlink subsys (disabled) May 8 00:38:39.188329 kernel: audit: type=2000 audit(1746664717.779:1): state=initialized audit_enabled=0 res=1 May 8 00:38:39.188336 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:38:39.188343 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:38:39.188350 kernel: cpuidle: using governor menu May 8 00:38:39.188356 kernel: ACPI: bus type PCI registered May 8 00:38:39.188363 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver 
version: 0.5 May 8 00:38:39.188370 kernel: dca service started, version 1.12.1 May 8 00:38:39.188377 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:38:39.188386 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 8 00:38:39.188393 kernel: PCI: Using configuration type 1 for base access May 8 00:38:39.188400 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 8 00:38:39.188407 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:38:39.188414 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:38:39.188420 kernel: ACPI: Added _OSI(Module Device) May 8 00:38:39.188427 kernel: ACPI: Added _OSI(Processor Device) May 8 00:38:39.188434 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:38:39.188440 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:38:39.188448 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 8 00:38:39.188455 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 8 00:38:39.188462 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 8 00:38:39.188469 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:38:39.188476 kernel: ACPI: Interpreter enabled May 8 00:38:39.188482 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:38:39.188489 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:38:39.188498 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:38:39.188507 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:38:39.188518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:38:39.188696 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:38:39.188795 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:38:39.188917 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER 
PCIeCapability] May 8 00:38:39.188931 kernel: PCI host bridge to bus 0000:00 May 8 00:38:39.189018 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:38:39.189149 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:38:39.189254 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:38:39.189323 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 8 00:38:39.189417 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:38:39.189502 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 8 00:38:39.189572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:38:39.189726 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:38:39.189891 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:38:39.189977 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 8 00:38:39.190060 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 8 00:38:39.190152 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 8 00:38:39.190237 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:38:39.190319 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:38:39.190397 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 8 00:38:39.190478 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 8 00:38:39.190552 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 8 00:38:39.190636 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:38:39.190723 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 8 00:38:39.190797 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 8 00:38:39.190902 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 8 00:38:39.190988 kernel: 
pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 8 00:38:39.191074 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 8 00:38:39.191146 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 8 00:38:39.191218 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 8 00:38:39.191292 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 8 00:38:39.191377 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:38:39.191458 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:38:39.191539 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:38:39.191615 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 8 00:38:39.191703 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 8 00:38:39.191796 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:38:39.191920 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 8 00:38:39.191931 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:38:39.191939 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:38:39.191946 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:38:39.191956 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:38:39.191963 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:38:39.191970 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:38:39.191977 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:38:39.191984 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:38:39.191991 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:38:39.191998 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:38:39.192005 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:38:39.192012 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 
00:38:39.192020 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:38:39.192027 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:38:39.192034 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:38:39.192041 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:38:39.192048 kernel: iommu: Default domain type: Translated May 8 00:38:39.192055 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:38:39.192175 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:38:39.192255 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:38:39.192341 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:38:39.192351 kernel: vgaarb: loaded May 8 00:38:39.192358 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 00:38:39.192366 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 00:38:39.192373 kernel: PTP clock support registered May 8 00:38:39.192380 kernel: PCI: Using ACPI for IRQ routing May 8 00:38:39.192387 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:38:39.192394 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 8 00:38:39.192401 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 8 00:38:39.192410 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:38:39.192417 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:38:39.192424 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:38:39.192431 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:38:39.192439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:38:39.192446 kernel: pnp: PnP ACPI init May 8 00:38:39.192529 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:38:39.192549 kernel: pnp: PnP ACPI: found 6 devices May 8 00:38:39.192558 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 
0xffffff, max_idle_ns: 2085701024 ns May 8 00:38:39.192570 kernel: NET: Registered PF_INET protocol family May 8 00:38:39.192579 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:38:39.192586 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:38:39.192593 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:38:39.192600 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:38:39.192607 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 8 00:38:39.192614 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:38:39.192622 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:38:39.192633 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:38:39.192642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:38:39.192649 kernel: NET: Registered PF_XDP protocol family May 8 00:38:39.192748 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:38:39.192823 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:38:39.192961 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:38:39.193044 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 8 00:38:39.193128 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:38:39.193207 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 8 00:38:39.193222 kernel: PCI: CLS 0 bytes, default 64 May 8 00:38:39.193229 kernel: Initialise system trusted keyrings May 8 00:38:39.193236 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:38:39.193244 kernel: Key type asymmetric registered May 8 00:38:39.193251 kernel: Asymmetric key parser 'x509' registered May 8 00:38:39.193265 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 249) May 8 00:38:39.193277 kernel: io scheduler mq-deadline registered May 8 00:38:39.193285 kernel: io scheduler kyber registered May 8 00:38:39.193294 kernel: io scheduler bfq registered May 8 00:38:39.193306 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:38:39.193316 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:38:39.193325 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:38:39.193333 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 8 00:38:39.193342 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:38:39.193349 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:38:39.193356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:38:39.193363 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:38:39.193370 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:38:39.193468 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:38:39.193479 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:38:39.193552 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:38:39.193653 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:38:38 UTC (1746664718) May 8 00:38:39.193744 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:38:39.193754 kernel: NET: Registered PF_INET6 protocol family May 8 00:38:39.193761 kernel: Segment Routing with IPv6 May 8 00:38:39.193769 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:38:39.193783 kernel: NET: Registered PF_PACKET protocol family May 8 00:38:39.193790 kernel: Key type dns_resolver registered May 8 00:38:39.193797 kernel: IPI shorthand broadcast: enabled May 8 00:38:39.193804 kernel: sched_clock: Marking stable (465001768, 364593596)->(903398617, -73803253) May 8 00:38:39.193811 kernel: registered taskstats version 1 May 8 00:38:39.193818 kernel: Loading compiled-in X.509 
certificates May 8 00:38:39.193825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: c9ff13353458e6fa2786638fdd3dcad841d1075c' May 8 00:38:39.193832 kernel: Key type .fscrypt registered May 8 00:38:39.193861 kernel: Key type fscrypt-provisioning registered May 8 00:38:39.193870 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:38:39.193877 kernel: ima: Allocated hash algorithm: sha1 May 8 00:38:39.193884 kernel: ima: No architecture policies found May 8 00:38:39.193891 kernel: clk: Disabling unused clocks May 8 00:38:39.193898 kernel: Freeing unused kernel image (initmem) memory: 47464K May 8 00:38:39.193905 kernel: Write protecting the kernel read-only data: 28672k May 8 00:38:39.193912 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 8 00:38:39.193919 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 8 00:38:39.193930 kernel: Run /init as init process May 8 00:38:39.193938 kernel: with arguments: May 8 00:38:39.193947 kernel: /init May 8 00:38:39.193955 kernel: with environment: May 8 00:38:39.193964 kernel: HOME=/ May 8 00:38:39.193972 kernel: TERM=linux May 8 00:38:39.193978 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:38:39.193991 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:38:39.194001 systemd[1]: Detected virtualization kvm. May 8 00:38:39.194010 systemd[1]: Detected architecture x86-64. May 8 00:38:39.194018 systemd[1]: Running in initrd. May 8 00:38:39.194025 systemd[1]: No hostname configured, using default hostname. May 8 00:38:39.194033 systemd[1]: Hostname set to . May 8 00:38:39.194040 systemd[1]: Initializing machine ID from VM UUID. 
May 8 00:38:39.194048 systemd[1]: Queued start job for default target initrd.target. May 8 00:38:39.194055 systemd[1]: Started systemd-ask-password-console.path. May 8 00:38:39.194063 systemd[1]: Reached target cryptsetup.target. May 8 00:38:39.194071 systemd[1]: Reached target paths.target. May 8 00:38:39.194095 systemd[1]: Reached target slices.target. May 8 00:38:39.194108 systemd[1]: Reached target swap.target. May 8 00:38:39.194117 systemd[1]: Reached target timers.target. May 8 00:38:39.194125 systemd[1]: Listening on iscsid.socket. May 8 00:38:39.194136 systemd[1]: Listening on iscsiuio.socket. May 8 00:38:39.194144 systemd[1]: Listening on systemd-journald-audit.socket. May 8 00:38:39.194152 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 00:38:39.194159 systemd[1]: Listening on systemd-journald.socket. May 8 00:38:39.194167 systemd[1]: Listening on systemd-networkd.socket. May 8 00:38:39.194175 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:38:39.194182 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:38:39.194190 systemd[1]: Reached target sockets.target. May 8 00:38:39.194197 systemd[1]: Starting kmod-static-nodes.service... May 8 00:38:39.194209 systemd[1]: Finished network-cleanup.service. May 8 00:38:39.194217 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:38:39.194224 systemd[1]: Starting systemd-journald.service... May 8 00:38:39.194232 systemd[1]: Starting systemd-modules-load.service... May 8 00:38:39.194243 systemd[1]: Starting systemd-resolved.service... May 8 00:38:39.194251 systemd[1]: Starting systemd-vconsole-setup.service... May 8 00:38:39.194259 systemd[1]: Finished kmod-static-nodes.service. May 8 00:38:39.194266 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:38:39.194274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:38:39.194286 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 8 00:38:39.194300 systemd-journald[197]: Journal started
May 8 00:38:39.194346 systemd-journald[197]: Runtime Journal (/run/log/journal/f897be4b8b0746a69f70b86c79d271da) is 6.0M, max 48.5M, 42.5M free.
May 8 00:38:39.186579 systemd-modules-load[198]: Inserted module 'overlay'
May 8 00:38:39.232899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:38:39.232935 kernel: Bridge firewalling registered
May 8 00:38:39.232949 kernel: audit: type=1130 audit(1746664719.226:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.232962 systemd[1]: Started systemd-journald.service.
May 8 00:38:39.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.222656 systemd-modules-load[198]: Inserted module 'br_netfilter'
May 8 00:38:39.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.222659 systemd-resolved[199]: Positive Trust Anchors:
May 8 00:38:39.222670 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:38:39.244132 kernel: audit: type=1130 audit(1746664719.233:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.244165 kernel: audit: type=1130 audit(1746664719.237:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.222719 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 8 00:38:39.225696 systemd-resolved[199]: Defaulting to hostname 'linux'.
May 8 00:38:39.233364 systemd[1]: Started systemd-resolved.service.
May 8 00:38:39.253485 kernel: SCSI subsystem initialized
May 8 00:38:39.238180 systemd[1]: Reached target nss-lookup.target.
May 8 00:38:39.253670 systemd[1]: Finished systemd-vconsole-setup.service.
May 8 00:38:39.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.260050 systemd[1]: Starting dracut-cmdline-ask.service...
May 8 00:38:39.260874 kernel: audit: type=1130 audit(1746664719.253:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.265341 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:38:39.265440 kernel: device-mapper: uevent: version 1.0.3
May 8 00:38:39.265455 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 8 00:38:39.269824 systemd-modules-load[198]: Inserted module 'dm_multipath'
May 8 00:38:39.270720 systemd[1]: Finished systemd-modules-load.service.
May 8 00:38:39.271639 systemd[1]: Starting systemd-sysctl.service...
May 8 00:38:39.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.275860 kernel: audit: type=1130 audit(1746664719.269:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.283971 systemd[1]: Finished systemd-sysctl.service.
May 8 00:38:39.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.287890 kernel: audit: type=1130 audit(1746664719.283:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.289952 systemd[1]: Finished dracut-cmdline-ask.service.
May 8 00:38:39.294744 kernel: audit: type=1130 audit(1746664719.289:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.291313 systemd[1]: Starting dracut-cmdline.service...
May 8 00:38:39.302781 dracut-cmdline[222]: dracut-dracut-053
May 8 00:38:39.305173 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488
May 8 00:38:39.367881 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:38:39.388906 kernel: iscsi: registered transport (tcp)
May 8 00:38:39.415896 kernel: iscsi: registered transport (qla4xxx)
May 8 00:38:39.415995 kernel: QLogic iSCSI HBA Driver
May 8 00:38:39.444565 systemd[1]: Finished dracut-cmdline.service.
May 8 00:38:39.449965 kernel: audit: type=1130 audit(1746664719.444:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.446515 systemd[1]: Starting dracut-pre-udev.service...
May 8 00:38:39.492892 kernel: raid6: avx2x4 gen() 27977 MB/s
May 8 00:38:39.509880 kernel: raid6: avx2x4 xor() 6631 MB/s
May 8 00:38:39.526881 kernel: raid6: avx2x2 gen() 28533 MB/s
May 8 00:38:39.543883 kernel: raid6: avx2x2 xor() 18397 MB/s
May 8 00:38:39.560882 kernel: raid6: avx2x1 gen() 23831 MB/s
May 8 00:38:39.577899 kernel: raid6: avx2x1 xor() 14937 MB/s
May 8 00:38:39.594900 kernel: raid6: sse2x4 gen() 14305 MB/s
May 8 00:38:39.611930 kernel: raid6: sse2x4 xor() 5228 MB/s
May 8 00:38:39.628918 kernel: raid6: sse2x2 gen() 13743 MB/s
May 8 00:38:39.645914 kernel: raid6: sse2x2 xor() 8865 MB/s
May 8 00:38:39.662913 kernel: raid6: sse2x1 gen() 11427 MB/s
May 8 00:38:39.680324 kernel: raid6: sse2x1 xor() 7500 MB/s
May 8 00:38:39.680403 kernel: raid6: using algorithm avx2x2 gen() 28533 MB/s
May 8 00:38:39.680414 kernel: raid6: .... xor() 18397 MB/s, rmw enabled
May 8 00:38:39.681065 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:38:39.693870 kernel: xor: automatically using best checksumming function avx
May 8 00:38:39.789887 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 8 00:38:39.801706 systemd[1]: Finished dracut-pre-udev.service.
May 8 00:38:39.807167 kernel: audit: type=1130 audit(1746664719.802:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.806000 audit: BPF prog-id=7 op=LOAD
May 8 00:38:39.806000 audit: BPF prog-id=8 op=LOAD
May 8 00:38:39.807550 systemd[1]: Starting systemd-udevd.service...
May 8 00:38:39.820577 systemd-udevd[400]: Using default interface naming scheme 'v252'.
May 8 00:38:39.824780 systemd[1]: Started systemd-udevd.service.
May 8 00:38:39.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.829303 systemd[1]: Starting dracut-pre-trigger.service...
May 8 00:38:39.838927 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
May 8 00:38:39.864415 systemd[1]: Finished dracut-pre-trigger.service.
May 8 00:38:39.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.875974 systemd[1]: Starting systemd-udev-trigger.service...
May 8 00:38:39.917992 systemd[1]: Finished systemd-udev-trigger.service.
May 8 00:38:39.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:39.954083 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:38:39.964356 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:38:39.964378 kernel: GPT:9289727 != 19775487
May 8 00:38:39.964391 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:38:39.964403 kernel: GPT:9289727 != 19775487
May 8 00:38:39.964414 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:38:39.964426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:38:39.964438 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:38:39.978003 kernel: libata version 3.00 loaded.
May 8 00:38:39.987884 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:38:39.998368 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:38:39.998389 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:38:39.998402 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:38:39.998528 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:38:39.998644 kernel: AES CTR mode by8 optimization enabled
May 8 00:38:39.998667 kernel: scsi host0: ahci
May 8 00:38:39.998797 kernel: scsi host1: ahci
May 8 00:38:39.998982 kernel: scsi host2: ahci
May 8 00:38:39.999128 kernel: scsi host3: ahci
May 8 00:38:39.999257 kernel: scsi host4: ahci
May 8 00:38:39.999378 kernel: scsi host5: ahci
May 8 00:38:39.999504 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 8 00:38:39.999518 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 8 00:38:39.999530 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 8 00:38:39.999554 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 8 00:38:39.999567 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 8 00:38:39.999578 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 8 00:38:40.011870 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
May 8 00:38:40.013121 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 8 00:38:40.053981 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 8 00:38:40.054540 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 8 00:38:40.059365 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 8 00:38:40.073891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 8 00:38:40.075404 systemd[1]: Starting disk-uuid.service...
May 8 00:38:40.092857 disk-uuid[532]: Primary Header is updated.
May 8 00:38:40.092857 disk-uuid[532]: Secondary Entries is updated.
May 8 00:38:40.092857 disk-uuid[532]: Secondary Header is updated.
May 8 00:38:40.096606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:38:40.308887 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:38:40.308976 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:38:40.311677 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:38:40.311777 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:38:40.311790 kernel: ata3.00: applying bridge limits
May 8 00:38:40.312862 kernel: ata3.00: configured for UDMA/100
May 8 00:38:40.324994 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:38:40.325136 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:38:40.326877 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:38:40.327872 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:38:40.364274 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:38:40.381728 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:38:40.381740 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:38:41.117880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:38:41.118176 disk-uuid[533]: The operation has completed successfully.
May 8 00:38:41.142667 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:38:41.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.142745 systemd[1]: Finished disk-uuid.service.
May 8 00:38:41.153165 systemd[1]: Starting verity-setup.service...
May 8 00:38:41.176865 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:38:41.194574 systemd[1]: Found device dev-mapper-usr.device.
May 8 00:38:41.212905 systemd[1]: Mounting sysusr-usr.mount...
May 8 00:38:41.214610 systemd[1]: Finished verity-setup.service.
May 8 00:38:41.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.277870 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 8 00:38:41.277954 systemd[1]: Mounted sysusr-usr.mount.
May 8 00:38:41.284528 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 8 00:38:41.285449 systemd[1]: Starting ignition-setup.service...
May 8 00:38:41.288112 systemd[1]: Starting parse-ip-for-networkd.service...
May 8 00:38:41.295228 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:38:41.295271 kernel: BTRFS info (device vda6): using free space tree
May 8 00:38:41.295283 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:38:41.304252 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:38:41.346796 systemd[1]: Finished parse-ip-for-networkd.service.
May 8 00:38:41.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.349000 audit: BPF prog-id=9 op=LOAD
May 8 00:38:41.350858 systemd[1]: Starting systemd-networkd.service...
May 8 00:38:41.361506 systemd[1]: Finished ignition-setup.service.
May 8 00:38:41.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.363072 systemd[1]: Starting ignition-fetch-offline.service...
May 8 00:38:41.384394 systemd-networkd[717]: lo: Link UP
May 8 00:38:41.384402 systemd-networkd[717]: lo: Gained carrier
May 8 00:38:41.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.385124 systemd-networkd[717]: Enumeration completed
May 8 00:38:41.385261 systemd[1]: Started systemd-networkd.service.
May 8 00:38:41.385572 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:38:41.386934 systemd[1]: Reached target network.target.
May 8 00:38:41.388776 systemd-networkd[717]: eth0: Link UP
May 8 00:38:41.388780 systemd-networkd[717]: eth0: Gained carrier
May 8 00:38:41.389998 systemd[1]: Starting iscsiuio.service...
May 8 00:38:41.473985 systemd[1]: Started iscsiuio.service.
May 8 00:38:41.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.475227 systemd[1]: Starting iscsid.service...
May 8 00:38:41.475376 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:38:41.479802 iscsid[729]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:38:41.479802 iscsid[729]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 8 00:38:41.479802 iscsid[729]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 8 00:38:41.479802 iscsid[729]: If using hardware iscsi like qla4xxx this message can be ignored.
May 8 00:38:41.479802 iscsid[729]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:38:41.479802 iscsid[729]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 8 00:38:41.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.498130 ignition[719]: Ignition 2.14.0
May 8 00:38:41.482279 systemd[1]: Started iscsid.service.
May 8 00:38:41.498139 ignition[719]: Stage: fetch-offline
May 8 00:38:41.491101 systemd[1]: Starting dracut-initqueue.service...
May 8 00:38:41.498211 ignition[719]: no configs at "/usr/lib/ignition/base.d"
May 8 00:38:41.498221 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:41.498378 ignition[719]: parsed url from cmdline: ""
May 8 00:38:41.498385 ignition[719]: no config URL provided
May 8 00:38:41.498392 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:38:41.498401 ignition[719]: no config at "/usr/lib/ignition/user.ign"
May 8 00:38:41.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.507686 systemd[1]: Finished dracut-initqueue.service.
May 8 00:38:41.498436 ignition[719]: op(1): [started] loading QEMU firmware config module
May 8 00:38:41.509442 systemd[1]: Reached target remote-fs-pre.target.
May 8 00:38:41.498442 ignition[719]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:38:41.511585 systemd[1]: Reached target remote-cryptsetup.target.
May 8 00:38:41.505166 ignition[719]: op(1): [finished] loading QEMU firmware config module
May 8 00:38:41.512626 systemd[1]: Reached target remote-fs.target.
May 8 00:38:41.514470 systemd[1]: Starting dracut-pre-mount.service...
May 8 00:38:41.522738 systemd[1]: Finished dracut-pre-mount.service.
May 8 00:38:41.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.557229 ignition[719]: parsing config with SHA512: 0567cbca00785898f3dd7abef9deab57808ca1a49f2f81006548f12a438a70886b19d69a891acc18ddaf5c3a521fd9554015e53bef685afff4aad824bb14a7cb
May 8 00:38:41.596675 unknown[719]: fetched base config from "system"
May 8 00:38:41.596693 unknown[719]: fetched user config from "qemu"
May 8 00:38:41.597422 ignition[719]: fetch-offline: fetch-offline passed
May 8 00:38:41.597490 ignition[719]: Ignition finished successfully
May 8 00:38:41.601464 systemd[1]: Finished ignition-fetch-offline.service.
May 8 00:38:41.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.601932 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:38:41.602975 systemd[1]: Starting ignition-kargs.service...
May 8 00:38:41.622122 ignition[745]: Ignition 2.14.0
May 8 00:38:41.622136 ignition[745]: Stage: kargs
May 8 00:38:41.622264 ignition[745]: no configs at "/usr/lib/ignition/base.d"
May 8 00:38:41.622275 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:41.624988 ignition[745]: kargs: kargs passed
May 8 00:38:41.625068 ignition[745]: Ignition finished successfully
May 8 00:38:41.629594 systemd[1]: Finished ignition-kargs.service.
May 8 00:38:41.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.631395 systemd[1]: Starting ignition-disks.service...
May 8 00:38:41.709996 ignition[751]: Ignition 2.14.0
May 8 00:38:41.710011 ignition[751]: Stage: disks
May 8 00:38:41.710156 ignition[751]: no configs at "/usr/lib/ignition/base.d"
May 8 00:38:41.710170 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:41.711352 ignition[751]: disks: disks passed
May 8 00:38:41.711393 ignition[751]: Ignition finished successfully
May 8 00:38:41.716785 systemd[1]: Finished ignition-disks.service.
May 8 00:38:41.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.718598 systemd[1]: Reached target initrd-root-device.target.
May 8 00:38:41.719073 systemd[1]: Reached target local-fs-pre.target.
May 8 00:38:41.720672 systemd[1]: Reached target local-fs.target.
May 8 00:38:41.722568 systemd[1]: Reached target sysinit.target.
May 8 00:38:41.723092 systemd[1]: Reached target basic.target.
May 8 00:38:41.726663 systemd[1]: Starting systemd-fsck-root.service...
May 8 00:38:41.744082 systemd-fsck[759]: ROOT: clean, 623/553520 files, 56023/553472 blocks
May 8 00:38:41.751207 systemd[1]: Finished systemd-fsck-root.service.
May 8 00:38:41.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.754765 systemd[1]: Mounting sysroot.mount...
May 8 00:38:41.762759 systemd[1]: Mounted sysroot.mount.
May 8 00:38:41.764222 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 8 00:38:41.764319 systemd[1]: Reached target initrd-root-fs.target.
May 8 00:38:41.765538 systemd[1]: Mounting sysroot-usr.mount...
May 8 00:38:41.766908 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 8 00:38:41.766938 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:38:41.766960 systemd[1]: Reached target ignition-diskful.target.
May 8 00:38:41.769557 systemd[1]: Mounted sysroot-usr.mount.
May 8 00:38:41.771237 systemd[1]: Starting initrd-setup-root.service...
May 8 00:38:41.779120 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:38:41.785452 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory
May 8 00:38:41.790004 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:38:41.794443 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:38:41.826470 systemd[1]: Finished initrd-setup-root.service.
May 8 00:38:41.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.829000 systemd[1]: Starting ignition-mount.service...
May 8 00:38:41.831130 systemd[1]: Starting sysroot-boot.service...
May 8 00:38:41.834435 bash[810]: umount: /sysroot/usr/share/oem: not mounted.
May 8 00:38:41.844922 ignition[811]: INFO : Ignition 2.14.0
May 8 00:38:41.846355 ignition[811]: INFO : Stage: mount
May 8 00:38:41.847402 ignition[811]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:38:41.848729 ignition[811]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:41.851052 systemd[1]: Finished sysroot-boot.service.
May 8 00:38:41.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:41.852440 systemd[1]: Finished ignition-mount.service.
May 8 00:38:41.854244 ignition[811]: INFO : mount: mount passed
May 8 00:38:41.854244 ignition[811]: INFO : Ignition finished successfully
May 8 00:38:42.220878 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 8 00:38:42.229866 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (820)
May 8 00:38:42.232150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:38:42.232172 kernel: BTRFS info (device vda6): using free space tree
May 8 00:38:42.232185 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:38:42.236374 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 8 00:38:42.238299 systemd[1]: Starting ignition-files.service...
May 8 00:38:42.256383 ignition[840]: INFO : Ignition 2.14.0
May 8 00:38:42.256383 ignition[840]: INFO : Stage: files
May 8 00:38:42.258168 ignition[840]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:38:42.258168 ignition[840]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:42.260640 ignition[840]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:38:42.260640 ignition[840]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:38:42.260640 ignition[840]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:38:42.265029 ignition[840]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:38:42.265029 ignition[840]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:38:42.265029 ignition[840]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:38:42.265029 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:38:42.265029 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:38:42.265029 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:38:42.265029 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:38:42.261987 unknown[840]: wrote ssh authorized keys file for user: core
May 8 00:38:42.340481 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:38:42.656574 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:38:42.656574 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:42.660963 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:38:43.011876 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:38:43.308996 systemd-networkd[717]: eth0: Gained IPv6LL
May 8 00:38:43.318115 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(c): [started] processing unit "containerd.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:38:43.318115 ignition[840]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:38:43.363145 ignition[840]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:38:43.365008 ignition[840]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:38:43.365008 ignition[840]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:38:43.365008 ignition[840]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:38:43.365008 ignition[840]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:38:43.365008 ignition[840]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:38:43.365008 ignition[840]: INFO : files: files passed
May 8 00:38:43.365008 ignition[840]: INFO : Ignition finished successfully
May 8 00:38:43.375402 systemd[1]: Finished ignition-files.service.
May 8 00:38:43.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.378172 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 8 00:38:43.378200 kernel: audit: type=1130 audit(1746664723.376:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.378429 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 8 00:38:43.381504 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 8 00:38:43.382335 systemd[1]: Starting ignition-quench.service...
May 8 00:38:43.393937 kernel: audit: type=1130 audit(1746664723.385:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.393955 kernel: audit: type=1131 audit(1746664723.385:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.385481 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:38:43.395385 initrd-setup-root-after-ignition[865]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 8 00:38:43.385586 systemd[1]: Finished ignition-quench.service.
May 8 00:38:43.399477 initrd-setup-root-after-ignition[867]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:38:43.401851 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 8 00:38:43.408046 kernel: audit: type=1130 audit(1746664723.401:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.402403 systemd[1]: Reached target ignition-complete.target.
May 8 00:38:43.411609 systemd[1]: Starting initrd-parse-etc.service...
May 8 00:38:43.424555 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:38:43.425793 systemd[1]: Finished initrd-parse-etc.service.
May 8 00:38:43.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.427898 systemd[1]: Reached target initrd-fs.target.
May 8 00:38:43.435615 kernel: audit: type=1130 audit(1746664723.426:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.435633 kernel: audit: type=1131 audit(1746664723.426:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.435610 systemd[1]: Reached target initrd.target.
May 8 00:38:43.437132 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 8 00:38:43.438994 systemd[1]: Starting dracut-pre-pivot.service...
May 8 00:38:43.450154 systemd[1]: Finished dracut-pre-pivot.service.
May 8 00:38:43.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.452409 systemd[1]: Starting initrd-cleanup.service...
May 8 00:38:43.456095 kernel: audit: type=1130 audit(1746664723.450:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.461156 systemd[1]: Stopped target nss-lookup.target.
May 8 00:38:43.462787 systemd[1]: Stopped target remote-cryptsetup.target.
May 8 00:38:43.464557 systemd[1]: Stopped target timers.target.
May 8 00:38:43.487808 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:38:43.488793 systemd[1]: Stopped dracut-pre-pivot.service.
May 8 00:38:43.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.490488 systemd[1]: Stopped target initrd.target.
May 8 00:38:43.494764 kernel: audit: type=1131 audit(1746664723.489:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.494804 systemd[1]: Stopped target basic.target.
May 8 00:38:43.496306 systemd[1]: Stopped target ignition-complete.target.
May 8 00:38:43.498064 systemd[1]: Stopped target ignition-diskful.target.
May 8 00:38:43.499789 systemd[1]: Stopped target initrd-root-device.target.
May 8 00:38:43.501586 systemd[1]: Stopped target remote-fs.target.
May 8 00:38:43.503176 systemd[1]: Stopped target remote-fs-pre.target.
May 8 00:38:43.504853 systemd[1]: Stopped target sysinit.target.
May 8 00:38:43.506379 systemd[1]: Stopped target local-fs.target.
May 8 00:38:43.507929 systemd[1]: Stopped target local-fs-pre.target.
May 8 00:38:43.509568 systemd[1]: Stopped target swap.target.
May 8 00:38:43.511044 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:38:43.512036 systemd[1]: Stopped dracut-pre-mount.service.
May 8 00:38:43.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.513710 systemd[1]: Stopped target cryptsetup.target.
May 8 00:38:43.518089 kernel: audit: type=1131 audit(1746664723.512:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.518110 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:38:43.519097 systemd[1]: Stopped dracut-initqueue.service.
May 8 00:38:43.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.520748 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:38:43.524588 kernel: audit: type=1131 audit(1746664723.519:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.520847 systemd[1]: Stopped ignition-fetch-offline.service.
May 8 00:38:43.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.526335 systemd[1]: Stopped target paths.target.
May 8 00:38:43.527813 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:38:43.532890 systemd[1]: Stopped systemd-ask-password-console.path.
May 8 00:38:43.534705 systemd[1]: Stopped target slices.target.
May 8 00:38:43.536237 systemd[1]: Stopped target sockets.target.
May 8 00:38:43.537777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:38:43.538986 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 8 00:38:43.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.541023 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:38:43.541107 systemd[1]: Stopped ignition-files.service.
May 8 00:38:43.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.544233 systemd[1]: Stopping ignition-mount.service...
May 8 00:38:43.545864 systemd[1]: Stopping iscsid.service...
May 8 00:38:43.547194 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:38:43.548290 iscsid[729]: iscsid shutting down.
May 8 00:38:43.548296 systemd[1]: Stopped kmod-static-nodes.service.
May 8 00:38:43.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.551165 ignition[880]: INFO : Ignition 2.14.0
May 8 00:38:43.551165 ignition[880]: INFO : Stage: umount
May 8 00:38:43.553605 ignition[880]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:38:43.553605 ignition[880]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:43.553605 ignition[880]: INFO : umount: umount passed
May 8 00:38:43.553605 ignition[880]: INFO : Ignition finished successfully
May 8 00:38:43.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.551376 systemd[1]: Stopping sysroot-boot.service...
May 8 00:38:43.552811 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:38:43.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.553608 systemd[1]: Stopped systemd-udev-trigger.service.
May 8 00:38:43.556658 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:38:43.558403 systemd[1]: Stopped dracut-pre-trigger.service.
May 8 00:38:43.565927 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:38:43.567309 systemd[1]: iscsid.service: Deactivated successfully.
May 8 00:38:43.568211 systemd[1]: Stopped iscsid.service.
May 8 00:38:43.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.569903 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:38:43.570880 systemd[1]: Stopped ignition-mount.service.
May 8 00:38:43.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.572714 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:38:43.573606 systemd[1]: Closed iscsid.socket.
May 8 00:38:43.574989 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:38:43.575028 systemd[1]: Stopped ignition-disks.service.
May 8 00:38:43.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.577432 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:38:43.577465 systemd[1]: Stopped ignition-kargs.service.
May 8 00:38:43.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.579845 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:38:43.579879 systemd[1]: Stopped ignition-setup.service.
May 8 00:38:43.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.600584 systemd[1]: Stopping iscsiuio.service...
May 8 00:38:43.602126 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:38:43.603095 systemd[1]: Finished initrd-cleanup.service.
May 8 00:38:43.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.604807 systemd[1]: iscsiuio.service: Deactivated successfully.
May 8 00:38:43.605732 systemd[1]: Stopped iscsiuio.service.
May 8 00:38:43.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.607757 systemd[1]: Stopped target network.target.
May 8 00:38:43.609250 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:38:43.609283 systemd[1]: Closed iscsiuio.socket.
May 8 00:38:43.632546 systemd[1]: Stopping systemd-networkd.service...
May 8 00:38:43.634584 systemd[1]: Stopping systemd-resolved.service...
May 8 00:38:43.637882 systemd-networkd[717]: eth0: DHCPv6 lease lost
May 8 00:38:43.639160 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:38:43.640353 systemd[1]: Stopped systemd-networkd.service.
May 8 00:38:43.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.642805 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:38:43.657000 audit: BPF prog-id=9 op=UNLOAD
May 8 00:38:43.642872 systemd[1]: Closed systemd-networkd.socket.
May 8 00:38:43.660091 systemd[1]: Stopping network-cleanup.service...
May 8 00:38:43.661731 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:38:43.661780 systemd[1]: Stopped parse-ip-for-networkd.service.
May 8 00:38:43.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.666359 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:38:43.666397 systemd[1]: Stopped systemd-sysctl.service.
May 8 00:38:43.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.668870 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:38:43.668910 systemd[1]: Stopped systemd-modules-load.service.
May 8 00:38:43.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.687406 systemd[1]: Stopping systemd-udevd.service...
May 8 00:38:43.690409 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:38:43.692086 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:38:43.693115 systemd[1]: Stopped systemd-resolved.service.
May 8 00:38:43.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.695459 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:38:43.696478 systemd[1]: Stopped systemd-udevd.service.
May 8 00:38:43.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.699039 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:38:43.699086 systemd[1]: Closed systemd-udevd-control.socket.
May 8 00:38:43.700000 audit: BPF prog-id=6 op=UNLOAD
May 8 00:38:43.710356 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:38:43.710393 systemd[1]: Closed systemd-udevd-kernel.socket.
May 8 00:38:43.712863 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:38:43.712904 systemd[1]: Stopped dracut-pre-udev.service.
May 8 00:38:43.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.715440 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:38:43.715477 systemd[1]: Stopped dracut-cmdline.service.
May 8 00:38:43.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.718322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:38:43.718366 systemd[1]: Stopped dracut-cmdline-ask.service.
May 8 00:38:43.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.722292 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 8 00:38:43.724470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:38:43.725655 systemd[1]: Stopped systemd-vconsole-setup.service.
May 8 00:38:43.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.729291 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:38:43.730325 systemd[1]: Stopped network-cleanup.service.
May 8 00:38:43.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.732030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:38:43.733136 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 8 00:38:43.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.783262 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:38:43.784189 systemd[1]: Stopped sysroot-boot.service.
May 8 00:38:43.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.785678 systemd[1]: Reached target initrd-switch-root.target.
May 8 00:38:43.787351 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:38:43.787386 systemd[1]: Stopped initrd-setup-root.service.
May 8 00:38:43.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:43.790480 systemd[1]: Starting initrd-switch-root.service...
May 8 00:38:43.796286 systemd[1]: Switching root.
May 8 00:38:43.798000 audit: BPF prog-id=8 op=UNLOAD
May 8 00:38:43.798000 audit: BPF prog-id=7 op=UNLOAD
May 8 00:38:43.799000 audit: BPF prog-id=5 op=UNLOAD
May 8 00:38:43.799000 audit: BPF prog-id=4 op=UNLOAD
May 8 00:38:43.799000 audit: BPF prog-id=3 op=UNLOAD
May 8 00:38:43.815763 systemd-journald[197]: Journal stopped
May 8 00:38:47.981623 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
May 8 00:38:47.981682 kernel: SELinux: Class mctp_socket not defined in policy.
May 8 00:38:47.981704 kernel: SELinux: Class anon_inode not defined in policy.
May 8 00:38:47.981714 kernel: SELinux: the above unknown classes and permissions will be allowed
May 8 00:38:47.981725 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:38:47.981741 kernel: SELinux: policy capability open_perms=1
May 8 00:38:47.981751 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:38:47.981760 kernel: SELinux: policy capability always_check_network=0
May 8 00:38:47.981770 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:38:47.981786 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:38:47.981796 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:38:47.981807 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:38:47.981818 systemd[1]: Successfully loaded SELinux policy in 42.484ms.
May 8 00:38:47.981856 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.496ms.
May 8 00:38:47.981874 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:38:47.981888 systemd[1]: Detected virtualization kvm.
May 8 00:38:47.981898 systemd[1]: Detected architecture x86-64.
May 8 00:38:47.981913 systemd[1]: Detected first boot.
May 8 00:38:47.981928 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:38:47.981938 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 8 00:38:47.981948 systemd[1]: Populated /etc with preset unit settings.
May 8 00:38:47.981959 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 8 00:38:47.981970 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 8 00:38:47.981982 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:38:47.981993 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:38:47.982003 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 8 00:38:47.982019 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 8 00:38:47.982030 systemd[1]: Created slice system-addon\x2drun.slice.
May 8 00:38:47.982040 systemd[1]: Created slice system-getty.slice.
May 8 00:38:47.982050 systemd[1]: Created slice system-modprobe.slice.
May 8 00:38:47.982060 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 8 00:38:47.982071 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 8 00:38:47.982083 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 8 00:38:47.982097 systemd[1]: Created slice user.slice.
May 8 00:38:47.982107 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:38:47.982122 systemd[1]: Started systemd-ask-password-wall.path.
May 8 00:38:47.982133 systemd[1]: Set up automount boot.automount.
May 8 00:38:47.982143 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 8 00:38:47.982153 systemd[1]: Reached target integritysetup.target.
May 8 00:38:47.982163 systemd[1]: Reached target remote-cryptsetup.target.
May 8 00:38:47.982174 systemd[1]: Reached target remote-fs.target.
May 8 00:38:47.982184 systemd[1]: Reached target slices.target.
May 8 00:38:47.982196 systemd[1]: Reached target swap.target.
May 8 00:38:47.982206 systemd[1]: Reached target torcx.target.
May 8 00:38:47.982221 systemd[1]: Reached target veritysetup.target.
May 8 00:38:47.982231 systemd[1]: Listening on systemd-coredump.socket.
May 8 00:38:47.982241 systemd[1]: Listening on systemd-initctl.socket.
May 8 00:38:47.982252 systemd[1]: Listening on systemd-journald-audit.socket.
May 8 00:38:47.982262 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 8 00:38:47.982273 systemd[1]: Listening on systemd-journald.socket.
May 8 00:38:47.982283 systemd[1]: Listening on systemd-networkd.socket.
May 8 00:38:47.982293 systemd[1]: Listening on systemd-udevd-control.socket.
May 8 00:38:47.982304 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 8 00:38:47.982320 systemd[1]: Listening on systemd-userdbd.socket.
May 8 00:38:47.982331 systemd[1]: Mounting dev-hugepages.mount...
May 8 00:38:47.982341 systemd[1]: Mounting dev-mqueue.mount...
May 8 00:38:47.982351 systemd[1]: Mounting media.mount...
May 8 00:38:47.982361 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:38:47.982372 systemd[1]: Mounting sys-kernel-debug.mount...
May 8 00:38:47.982382 systemd[1]: Mounting sys-kernel-tracing.mount...
May 8 00:38:47.982391 systemd[1]: Mounting tmp.mount...
May 8 00:38:47.982401 systemd[1]: Starting flatcar-tmpfiles.service...
May 8 00:38:47.982417 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 8 00:38:47.982427 systemd[1]: Starting kmod-static-nodes.service...
May 8 00:38:47.982438 systemd[1]: Starting modprobe@configfs.service...
May 8 00:38:47.982458 systemd[1]: Starting modprobe@dm_mod.service...
May 8 00:38:47.982468 systemd[1]: Starting modprobe@drm.service...
May 8 00:38:47.982479 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:38:47.982489 systemd[1]: Starting modprobe@fuse.service...
May 8 00:38:47.982499 systemd[1]: Starting modprobe@loop.service...
May 8 00:38:47.982510 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:38:47.982525 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 8 00:38:47.982536 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 8 00:38:47.982546 systemd[1]: Starting systemd-journald.service...
May 8 00:38:47.982561 kernel: loop: module loaded
May 8 00:38:47.982571 systemd[1]: Starting systemd-modules-load.service...
May 8 00:38:47.982581 systemd[1]: Starting systemd-network-generator.service...
May 8 00:38:47.982591 systemd[1]: Starting systemd-remount-fs.service...
May 8 00:38:47.982601 systemd[1]: Starting systemd-udev-trigger.service...
May 8 00:38:47.982611 kernel: fuse: init (API version 7.34)
May 8 00:38:47.982626 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:38:47.982637 systemd[1]: Mounted dev-hugepages.mount.
May 8 00:38:47.982647 systemd[1]: Mounted dev-mqueue.mount.
May 8 00:38:47.982657 systemd[1]: Mounted media.mount.
May 8 00:38:47.982667 systemd[1]: Mounted sys-kernel-debug.mount.
May 8 00:38:47.982677 systemd[1]: Mounted sys-kernel-tracing.mount.
May 8 00:38:47.982689 systemd[1]: Mounted tmp.mount.
May 8 00:38:47.982704 systemd[1]: Finished kmod-static-nodes.service.
May 8 00:38:47.982716 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:38:47.982730 systemd[1]: Finished modprobe@configfs.service.
May 8 00:38:47.982746 systemd-journald[1022]: Journal started
May 8 00:38:47.982799 systemd-journald[1022]: Runtime Journal (/run/log/journal/f897be4b8b0746a69f70b86c79d271da) is 6.0M, max 48.5M, 42.5M free.
May 8 00:38:47.870000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 8 00:38:47.870000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 8 00:38:47.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.979000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 8 00:38:47.979000 audit[1022]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff1dda3cc0 a2=4000 a3=7fff1dda3d5c items=0 ppid=1 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:38:47.979000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 8 00:38:47.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.985391 systemd[1]: Started systemd-journald.service.
May 8 00:38:47.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.987095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:38:47.987247 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:38:47.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.988395 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:38:47.988545 systemd[1]: Finished modprobe@drm.service.
May 8 00:38:47.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.989631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:38:47.989764 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:38:47.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.990986 systemd[1]: Finished flatcar-tmpfiles.service.
May 8 00:38:47.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.992139 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:38:47.992293 systemd[1]: Finished modprobe@fuse.service.
May 8 00:38:47.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.993496 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:38:47.993662 systemd[1]: Finished modprobe@loop.service.
May 8 00:38:47.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:47.994949 systemd[1]: Finished systemd-modules-load.service.
May 8 00:38:47.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:47.996324 systemd[1]: Finished systemd-network-generator.service. May 8 00:38:47.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:47.997696 systemd[1]: Finished systemd-remount-fs.service. May 8 00:38:47.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:47.999095 systemd[1]: Reached target network-pre.target. May 8 00:38:48.001142 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:38:48.003295 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:38:48.004725 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:38:48.006309 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:38:48.008608 systemd[1]: Starting systemd-journal-flush.service... May 8 00:38:48.009585 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:38:48.010628 systemd[1]: Starting systemd-random-seed.service... May 8 00:38:48.011629 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:38:48.012606 systemd[1]: Starting systemd-sysctl.service... May 8 00:38:48.014533 systemd[1]: Starting systemd-sysusers.service... May 8 00:38:48.019001 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
May 8 00:38:48.019830 systemd-journald[1022]: Time spent on flushing to /var/log/journal/f897be4b8b0746a69f70b86c79d271da is 39.203ms for 1037 entries. May 8 00:38:48.019830 systemd-journald[1022]: System Journal (/var/log/journal/f897be4b8b0746a69f70b86c79d271da) is 8.0M, max 195.6M, 187.6M free. May 8 00:38:48.070641 systemd-journald[1022]: Received client request to flush runtime journal. May 8 00:38:48.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.021422 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:38:48.022876 systemd[1]: Finished systemd-random-seed.service. May 8 00:38:48.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.024090 systemd[1]: Reached target first-boot-complete.target. May 8 00:38:48.073401 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
May 8 00:38:48.032604 systemd[1]: Finished systemd-sysctl.service. May 8 00:38:48.039929 systemd[1]: Finished systemd-sysusers.service. May 8 00:38:48.043647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:38:48.061680 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:38:48.064565 systemd[1]: Starting systemd-udev-settle.service... May 8 00:38:48.071967 systemd[1]: Finished systemd-journal-flush.service. May 8 00:38:48.086694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:38:48.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.669028 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:38:48.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.671112 kernel: kauditd_printk_skb: 76 callbacks suppressed May 8 00:38:48.671153 kernel: audit: type=1130 audit(1746664728.669:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.671599 systemd[1]: Starting systemd-udevd.service... May 8 00:38:48.691926 systemd-udevd[1073]: Using default interface naming scheme 'v252'. May 8 00:38:48.706826 systemd[1]: Started systemd-udevd.service. May 8 00:38:48.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.709896 systemd[1]: Starting systemd-networkd.service... 
May 8 00:38:48.714469 kernel: audit: type=1130 audit(1746664728.706:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.715522 systemd[1]: Starting systemd-userdbd.service... May 8 00:38:48.765068 systemd[1]: Started systemd-userdbd.service. May 8 00:38:48.770852 kernel: audit: type=1130 audit(1746664728.766:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.770464 systemd[1]: Found device dev-ttyS0.device. May 8 00:38:48.786993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:38:48.792870 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:38:48.801871 kernel: ACPI: button: Power Button [PWRF] May 8 00:38:48.823279 systemd-networkd[1082]: lo: Link UP May 8 00:38:48.823291 systemd-networkd[1082]: lo: Gained carrier May 8 00:38:48.823802 systemd-networkd[1082]: Enumeration completed May 8 00:38:48.823935 systemd[1]: Started systemd-networkd.service. May 8 00:38:48.824624 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:38:48.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:38:48.826015 systemd-networkd[1082]: eth0: Link UP May 8 00:38:48.826100 systemd-networkd[1082]: eth0: Gained carrier May 8 00:38:48.829863 kernel: audit: type=1130 audit(1746664728.824:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:38:48.818000 audit[1097]: AVC avc: denied { confidentiality } for pid=1097 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 8 00:38:48.818000 audit[1097]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564cf78184b0 a1=338ac a2=7f3eb398abc5 a3=5 items=110 ppid=1073 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:38:48.846896 kernel: audit: type=1400 audit(1746664728.818:115): avc: denied { confidentiality } for pid=1097 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 8 00:38:48.846977 kernel: audit: type=1300 audit(1746664728.818:115): arch=c000003e syscall=175 success=yes exit=0 a0=564cf78184b0 a1=338ac a2=7f3eb398abc5 a3=5 items=110 ppid=1073 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:38:48.818000 audit: CWD cwd="/" May 8 00:38:48.848233 kernel: audit: type=1307 audit(1746664728.818:115): cwd="/" May 8 00:38:48.818000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 8 00:38:48.851911 kernel: audit: type=1302 audit(1746664728.818:115): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=1 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.856348 kernel: audit: type=1302 audit(1746664728.818:115): item=1 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.856267 systemd-networkd[1082]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:38:48.860361 kernel: audit: type=1302 audit(1746664728.818:115): item=2 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=2 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=3 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=4 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=5 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH 
item=6 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=7 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=8 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=9 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=10 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=11 name=(null) inode=14518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=12 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=13 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=14 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=15 name=(null) inode=14520 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=16 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=17 name=(null) inode=14521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=18 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=19 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=20 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=21 name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=22 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=23 name=(null) inode=14524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=24 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=25 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=26 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=27 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=28 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=29 name=(null) inode=14527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=30 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=31 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=32 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=33 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=34 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=35 name=(null) inode=14530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=36 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=37 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=38 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=39 name=(null) inode=14532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=40 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=41 name=(null) inode=14533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=42 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 8 00:38:48.818000 audit: PATH item=43 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=44 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=45 name=(null) inode=14535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=46 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=47 name=(null) inode=14536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=48 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=49 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=50 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=51 name=(null) inode=14538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=52 
name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=53 name=(null) inode=14539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=55 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=56 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=57 name=(null) inode=14541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=58 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=59 name=(null) inode=14542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=60 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=61 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=62 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=63 name=(null) inode=14544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=64 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=65 name=(null) inode=14545 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=66 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=67 name=(null) inode=14546 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.863870 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:38:48.870429 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:38:48.870857 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:38:48.818000 audit: PATH item=68 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=69 name=(null) inode=14547 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=70 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=71 name=(null) inode=14548 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=72 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=73 name=(null) inode=14549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=74 name=(null) inode=14549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=75 name=(null) inode=14550 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=76 name=(null) inode=14549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=77 name=(null) inode=14551 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:38:48.818000 audit: PATH item=78 name=(null) inode=14549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=79 name=(null) inode=14552 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=80 name=(null) inode=14549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=81 name=(null) inode=14553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=82 name=(null) inode=14549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=83 name=(null) inode=14554 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=84 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=85 name=(null) inode=14555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=86 name=(null) inode=14555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=87 name=(null) inode=14556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=88 name=(null) inode=14555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=89 name=(null) inode=14557 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=90 name=(null) inode=14555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=91 name=(null) inode=14558 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=92 name=(null) inode=14555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=93 name=(null) inode=14559 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=94 name=(null) inode=14555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=95 name=(null) inode=14560 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=96 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=97 name=(null) inode=14561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=98 name=(null) inode=14561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=99 name=(null) inode=14562 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=100 name=(null) inode=14561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.873869 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:38:48.818000 audit: PATH item=101 name=(null) inode=14563 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=102 name=(null) inode=14561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=103 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=104 name=(null) inode=14561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=105 name=(null) inode=14565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=106 name=(null) inode=14561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=107 name=(null) inode=14566 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PATH item=109 name=(null) inode=14567 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 8 00:38:48.818000 audit: PROCTITLE proctitle="(udev-worker)"
May 8 00:38:48.875880 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:38:48.949313 kernel: kvm: Nested Virtualization enabled
May 8 00:38:48.949433 kernel: SVM: kvm: Nested Paging enabled
May 8 00:38:48.949451 kernel: SVM: Virtual VMLOAD VMSAVE supported
May 8 00:38:48.949956 kernel: SVM: Virtual GIF supported
May 8 00:38:48.965871 kernel: EDAC MC: Ver: 3.0.0
May 8 00:38:48.991366 systemd[1]: Finished systemd-udev-settle.service.
May 8 00:38:48.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:48.993622 systemd[1]: Starting lvm2-activation-early.service...
May 8 00:38:49.002810 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:38:49.031743 systemd[1]: Finished lvm2-activation-early.service.
May 8 00:38:49.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.032807 systemd[1]: Reached target cryptsetup.target.
May 8 00:38:49.034740 systemd[1]: Starting lvm2-activation.service...
May 8 00:38:49.038785 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:38:49.068031 systemd[1]: Finished lvm2-activation.service.
May 8 00:38:49.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.069186 systemd[1]: Reached target local-fs-pre.target.
May 8 00:38:49.070237 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:38:49.070277 systemd[1]: Reached target local-fs.target.
May 8 00:38:49.071230 systemd[1]: Reached target machines.target.
May 8 00:38:49.073714 systemd[1]: Starting ldconfig.service...
May 8 00:38:49.074895 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 8 00:38:49.074936 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.076249 systemd[1]: Starting systemd-boot-update.service...
May 8 00:38:49.078448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 8 00:38:49.080906 systemd[1]: Starting systemd-machine-id-commit.service...
May 8 00:38:49.083826 systemd[1]: Starting systemd-sysext.service...
May 8 00:38:49.085130 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1115 (bootctl)
May 8 00:38:49.086345 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 8 00:38:49.090574 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 8 00:38:49.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.094253 systemd[1]: Unmounting usr-share-oem.mount...
May 8 00:38:49.098887 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 8 00:38:49.099104 systemd[1]: Unmounted usr-share-oem.mount.
May 8 00:38:49.110870 kernel: loop0: detected capacity change from 0 to 210664
May 8 00:38:49.133962 systemd-fsck[1127]: fsck.fat 4.2 (2021-01-31)
May 8 00:38:49.133962 systemd-fsck[1127]: /dev/vda1: 790 files, 120710/258078 clusters
May 8 00:38:49.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.135755 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 8 00:38:49.185027 systemd[1]: Mounting boot.mount...
May 8 00:38:49.384766 systemd[1]: Mounted boot.mount.
May 8 00:38:49.396441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:38:49.397055 systemd[1]: Finished systemd-boot-update.service.
May 8 00:38:49.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.398479 systemd[1]: Finished systemd-machine-id-commit.service.
May 8 00:38:49.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.401855 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:38:49.415856 kernel: loop1: detected capacity change from 0 to 210664
May 8 00:38:49.471259 (sd-sysext)[1136]: Using extensions 'kubernetes'.
May 8 00:38:49.471969 (sd-sysext)[1136]: Merged extensions into '/usr'.
May 8 00:38:49.489029 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:38:49.490434 systemd[1]: Mounting usr-share-oem.mount...
May 8 00:38:49.491392 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 8 00:38:49.492556 systemd[1]: Starting modprobe@dm_mod.service...
May 8 00:38:49.494904 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:38:49.497201 systemd[1]: Starting modprobe@loop.service...
May 8 00:38:49.498498 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 8 00:38:49.498797 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.499117 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:38:49.503506 systemd[1]: Mounted usr-share-oem.mount.
May 8 00:38:49.505150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:38:49.505490 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:38:49.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.507233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:38:49.507413 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:38:49.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.509222 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:38:49.509396 systemd[1]: Finished modprobe@loop.service.
May 8 00:38:49.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.511289 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:38:49.511434 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 8 00:38:49.513009 systemd[1]: Finished systemd-sysext.service.
May 8 00:38:49.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.515623 systemd[1]: Starting ensure-sysext.service...
May 8 00:38:49.517773 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 8 00:38:49.528916 ldconfig[1114]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:38:49.534493 systemd[1]: Finished ldconfig.service.
May 8 00:38:49.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.539278 systemd[1]: Reloading.
May 8 00:38:49.546002 systemd-tmpfiles[1150]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 8 00:38:49.547495 systemd-tmpfiles[1150]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:38:49.549420 systemd-tmpfiles[1150]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:38:49.630346 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2025-05-08T00:38:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 8 00:38:49.630775 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2025-05-08T00:38:49Z" level=info msg="torcx already run"
May 8 00:38:49.729529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 8 00:38:49.729551 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 8 00:38:49.747254 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:38:49.806802 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 8 00:38:49.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.811284 systemd[1]: Starting audit-rules.service...
May 8 00:38:49.813953 systemd[1]: Starting clean-ca-certificates.service...
May 8 00:38:49.816266 systemd[1]: Starting systemd-journal-catalog-update.service...
May 8 00:38:49.819494 systemd[1]: Starting systemd-resolved.service...
May 8 00:38:49.822665 systemd[1]: Starting systemd-timesyncd.service...
May 8 00:38:49.825103 systemd[1]: Starting systemd-update-utmp.service...
May 8 00:38:49.827054 systemd[1]: Finished clean-ca-certificates.service.
May 8 00:38:49.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.829000 audit[1233]: SYSTEM_BOOT pid=1233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.838247 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 8 00:38:49.840160 systemd[1]: Starting modprobe@dm_mod.service...
May 8 00:38:49.842885 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:38:49.845643 systemd[1]: Starting modprobe@loop.service...
May 8 00:38:49.846820 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 8 00:38:49.847122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.847292 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:38:49.848775 systemd[1]: Finished systemd-journal-catalog-update.service.
May 8 00:38:49.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.851814 systemd[1]: Finished systemd-update-utmp.service.
May 8 00:38:49.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.853656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:38:49.853902 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:38:49.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.855577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:38:49.855799 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:38:49.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.857377 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:38:49.857762 systemd[1]: Finished modprobe@loop.service.
May 8 00:38:49.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:38:49.860774 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:38:49.860963 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 8 00:38:49.862000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 8 00:38:49.862000 audit[1250]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd8df8370 a2=420 a3=0 items=0 ppid=1220 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:38:49.862000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 8 00:38:49.863959 augenrules[1250]: No rules
May 8 00:38:49.863505 systemd[1]: Starting systemd-update-done.service...
May 8 00:38:49.866234 systemd[1]: Finished audit-rules.service.
May 8 00:38:49.869693 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 8 00:38:49.871687 systemd[1]: Starting modprobe@dm_mod.service...
May 8 00:38:49.875120 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:38:49.877796 systemd[1]: Starting modprobe@loop.service...
May 8 00:38:49.879017 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 8 00:38:49.879168 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.879292 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:38:49.880955 systemd[1]: Finished systemd-update-done.service.
May 8 00:38:49.883150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:38:49.883368 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:38:49.885237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:38:49.885413 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:38:49.886950 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:38:49.887115 systemd[1]: Finished modprobe@loop.service.
May 8 00:38:49.889659 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:38:49.889947 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 8 00:38:49.892799 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 8 00:38:49.905122 systemd[1]: Starting modprobe@dm_mod.service...
May 8 00:38:49.907968 systemd[1]: Starting modprobe@drm.service...
May 8 00:38:49.910176 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:38:49.912339 systemd[1]: Starting modprobe@loop.service...
May 8 00:38:49.913448 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 8 00:38:49.913551 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.915621 systemd[1]: Starting systemd-networkd-wait-online.service...
May 8 00:38:49.917197 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:38:49.918600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:38:49.918741 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:38:49.920146 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:38:49.920424 systemd[1]: Finished modprobe@drm.service.
May 8 00:38:49.921750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:38:49.922022 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:38:49.923597 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:38:49.923847 systemd[1]: Finished modprobe@loop.service.
May 8 00:38:49.925059 systemd-resolved[1229]: Positive Trust Anchors:
May 8 00:38:49.925418 systemd-resolved[1229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:38:49.925537 systemd-resolved[1229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 8 00:38:49.925816 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:38:49.925938 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 8 00:38:49.927287 systemd[1]: Finished ensure-sysext.service.
May 8 00:38:49.930566 systemd[1]: Started systemd-timesyncd.service.
May 8 00:38:49.932405 systemd-timesyncd[1231]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:38:49.932590 systemd[1]: Reached target time-set.target.
May 8 00:38:49.932796 systemd-timesyncd[1231]: Initial clock synchronization to Thu 2025-05-08 00:38:49.994329 UTC.
May 8 00:38:49.936056 systemd-resolved[1229]: Defaulting to hostname 'linux'.
May 8 00:38:49.938054 systemd[1]: Started systemd-resolved.service.
May 8 00:38:49.939300 systemd[1]: Reached target network.target.
May 8 00:38:49.940429 systemd[1]: Reached target nss-lookup.target.
May 8 00:38:49.941652 systemd[1]: Reached target sysinit.target.
May 8 00:38:49.942791 systemd[1]: Started motdgen.path.
May 8 00:38:49.943898 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 8 00:38:49.945752 systemd[1]: Started logrotate.timer.
May 8 00:38:49.946938 systemd[1]: Started mdadm.timer.
May 8 00:38:49.947900 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 8 00:38:49.949148 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:38:49.949188 systemd[1]: Reached target paths.target.
May 8 00:38:49.950271 systemd[1]: Reached target timers.target.
May 8 00:38:49.951765 systemd[1]: Listening on dbus.socket.
May 8 00:38:49.953925 systemd[1]: Starting docker.socket...
May 8 00:38:49.956005 systemd[1]: Listening on sshd.socket.
May 8 00:38:49.957142 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.957508 systemd[1]: Listening on docker.socket.
May 8 00:38:49.958488 systemd[1]: Reached target sockets.target.
May 8 00:38:49.959485 systemd[1]: Reached target basic.target.
May 8 00:38:49.960590 systemd[1]: System is tainted: cgroupsv1
May 8 00:38:49.960641 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 8 00:38:49.960664 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 8 00:38:49.962059 systemd[1]: Starting containerd.service...
May 8 00:38:49.964016 systemd[1]: Starting dbus.service...
May 8 00:38:49.966151 systemd[1]: Starting enable-oem-cloudinit.service...
May 8 00:38:49.968628 systemd[1]: Starting extend-filesystems.service...
May 8 00:38:49.969813 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 8 00:38:49.972171 jq[1282]: false
May 8 00:38:49.971893 systemd[1]: Starting motdgen.service...
May 8 00:38:49.974336 systemd[1]: Starting prepare-helm.service...
May 8 00:38:49.976342 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 8 00:38:49.978656 systemd[1]: Starting sshd-keygen.service...
May 8 00:38:49.982625 systemd[1]: Starting systemd-logind.service...
May 8 00:38:49.984163 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:38:49.984298 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:38:49.986591 dbus-daemon[1281]: [system] SELinux support is enabled
May 8 00:38:50.012130 extend-filesystems[1283]: Found loop1
May 8 00:38:50.017677 extend-filesystems[1283]: Found sr0
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda1
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda2
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda3
May 8 00:38:50.017677 extend-filesystems[1283]: Found usr
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda4
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda6
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda7
May 8 00:38:50.017677 extend-filesystems[1283]: Found vda9
May 8 00:38:50.017677 extend-filesystems[1283]: Checking size of /dev/vda9
May 8 00:38:50.013637 systemd[1]: Starting update-engine.service...
May 8 00:38:50.031222 jq[1306]: true
May 8 00:38:50.016447 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 8 00:38:50.020035 systemd[1]: Started dbus.service.
May 8 00:38:50.033866 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:38:50.034202 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 8 00:38:50.035778 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:38:50.036209 systemd[1]: Finished motdgen.service.
May 8 00:38:50.037862 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:38:50.038119 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 8 00:38:50.042798 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:38:50.042868 systemd[1]: Reached target system-config.target.
May 8 00:38:50.046135 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:38:50.046159 systemd[1]: Reached target user-config.target.
May 8 00:38:50.058308 jq[1313]: true
May 8 00:38:50.067398 tar[1310]: linux-amd64/helm
May 8 00:38:50.070473 extend-filesystems[1283]: Resized partition /dev/vda9
May 8 00:38:50.089862 extend-filesystems[1320]: resize2fs 1.46.5 (30-Dec-2021)
May 8 00:38:50.093987 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:38:50.143146 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:38:50.143186 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:38:50.147547 systemd-logind[1294]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:38:50.147573 systemd-logind[1294]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:38:50.148089 systemd-logind[1294]: New seat seat0.
May 8 00:38:50.151304 systemd[1]: Started systemd-logind.service.
May 8 00:38:50.175190 update_engine[1300]: I0508 00:38:50.174737 1300 main.cc:92] Flatcar Update Engine starting
May 8 00:38:50.178199 update_engine[1300]: I0508 00:38:50.178115 1300 update_check_scheduler.cc:74] Next update check in 6m26s
May 8 00:38:50.178125 systemd[1]: Started update-engine.service.
May 8 00:38:50.179063 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:38:50.181429 systemd[1]: Started locksmithd.service.
May 8 00:38:50.206887 extend-filesystems[1320]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:38:50.206887 extend-filesystems[1320]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:38:50.206887 extend-filesystems[1320]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:38:50.205701 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:38:50.212651 extend-filesystems[1283]: Resized filesystem in /dev/vda9
May 8 00:38:50.206013 systemd[1]: Finished extend-filesystems.service.
May 8 00:38:50.246434 env[1314]: time="2025-05-08T00:38:50.246300648Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 8 00:38:50.250680 bash[1338]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:38:50.251084 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 8 00:38:50.270169 env[1314]: time="2025-05-08T00:38:50.270082380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:38:50.270328 env[1314]: time="2025-05-08T00:38:50.270299171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:38:50.271753 env[1314]: time="2025-05-08T00:38:50.271712212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:38:50.271753 env[1314]: time="2025-05-08T00:38:50.271750771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:38:50.272103 env[1314]: time="2025-05-08T00:38:50.272069435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:38:50.272103 env[1314]: time="2025-05-08T00:38:50.272100088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:38:50.272170 env[1314]: time="2025-05-08T00:38:50.272116545Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 8 00:38:50.272170 env[1314]: time="2025-05-08T00:38:50.272129035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:38:50.272252 env[1314]: time="2025-05-08T00:38:50.272222820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:38:50.272522 env[1314]: time="2025-05-08T00:38:50.272491125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:38:50.272709 env[1314]: time="2025-05-08T00:38:50.272675082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:38:50.272709 env[1314]: time="2025-05-08T00:38:50.272703039Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:38:50.272785 env[1314]: time="2025-05-08T00:38:50.272767020Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 8 00:38:50.272811 env[1314]: time="2025-05-08T00:38:50.272783962Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:38:50.340957 locksmithd[1340]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345621163Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345681661Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345697381Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345765210Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345787786Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345801870Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345818610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..."
type=io.containerd.service.v1 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345833855Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345859066Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345873434Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345887337Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.345900846Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.346057493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:38:50.346535 env[1314]: time="2025-05-08T00:38:50.346151954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346632426Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346704989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346719023Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346784903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346801208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346814142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346825661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346849954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346862786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346875710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346889290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:38:50.346928 env[1314]: time="2025-05-08T00:38:50.346907464Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347067048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347083263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347095298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347110967Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347127364Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347143448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347167931Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 00:38:50.347218 env[1314]: time="2025-05-08T00:38:50.347208237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:38:50.347496 env[1314]: time="2025-05-08T00:38:50.347419697Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:38:50.347496 env[1314]: time="2025-05-08T00:38:50.347477076Z" level=info msg="Connect containerd service" May 8 00:38:50.348432 env[1314]: time="2025-05-08T00:38:50.347508728Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:38:50.348432 env[1314]: time="2025-05-08T00:38:50.348124693Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:38:50.348432 env[1314]: time="2025-05-08T00:38:50.348392209Z" level=info msg="Start subscribing containerd event" May 8 00:38:50.348519 env[1314]: time="2025-05-08T00:38:50.348440137Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 8 00:38:50.350366 env[1314]: time="2025-05-08T00:38:50.349102031Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:38:50.350366 env[1314]: time="2025-05-08T00:38:50.348755097Z" level=info msg="Start recovering state" May 8 00:38:50.350366 env[1314]: time="2025-05-08T00:38:50.349258183Z" level=info msg="Start event monitor" May 8 00:38:50.350366 env[1314]: time="2025-05-08T00:38:50.349281567Z" level=info msg="Start snapshots syncer" May 8 00:38:50.350366 env[1314]: time="2025-05-08T00:38:50.349302113Z" level=info msg="Start cni network conf syncer for default" May 8 00:38:50.350366 env[1314]: time="2025-05-08T00:38:50.349312148Z" level=info msg="Start streaming server" May 8 00:38:50.349654 systemd[1]: Started containerd.service. May 8 00:38:50.351383 env[1314]: time="2025-05-08T00:38:50.351357198Z" level=info msg="containerd successfully booted in 0.164583s" May 8 00:38:50.461297 sshd_keygen[1303]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:38:50.480574 systemd[1]: Finished sshd-keygen.service. May 8 00:38:50.501998 systemd[1]: Starting issuegen.service... May 8 00:38:50.504090 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:38:50.504530 systemd[1]: Finished issuegen.service. May 8 00:38:50.507520 systemd[1]: Starting systemd-user-sessions.service... May 8 00:38:50.514231 systemd[1]: Finished systemd-user-sessions.service. May 8 00:38:50.517660 systemd[1]: Started getty@tty1.service. May 8 00:38:50.521690 systemd[1]: Started serial-getty@ttyS0.service. May 8 00:38:50.523018 systemd[1]: Reached target getty.target. May 8 00:38:50.668355 tar[1310]: linux-amd64/LICENSE May 8 00:38:50.668559 tar[1310]: linux-amd64/README.md May 8 00:38:50.673188 systemd[1]: Finished prepare-helm.service. May 8 00:38:50.733219 systemd-networkd[1082]: eth0: Gained IPv6LL May 8 00:38:50.735644 systemd[1]: Finished systemd-networkd-wait-online.service. 
May 8 00:38:50.737400 systemd[1]: Reached target network-online.target. May 8 00:38:50.740667 systemd[1]: Starting kubelet.service... May 8 00:38:51.553370 systemd[1]: Created slice system-sshd.slice. May 8 00:38:51.556158 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:53512.service. May 8 00:38:51.618898 sshd[1379]: Accepted publickey for core from 10.0.0.1 port 53512 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:38:51.620728 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:38:51.632896 systemd-logind[1294]: New session 1 of user core. May 8 00:38:51.633836 systemd[1]: Created slice user-500.slice. May 8 00:38:51.673294 systemd[1]: Starting user-runtime-dir@500.service... May 8 00:38:51.687021 systemd[1]: Finished user-runtime-dir@500.service. May 8 00:38:51.690623 systemd[1]: Starting user@500.service... May 8 00:38:51.693737 (systemd)[1384]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:38:51.826586 systemd[1384]: Queued start job for default target default.target. May 8 00:38:51.826792 systemd[1384]: Reached target paths.target. May 8 00:38:51.826807 systemd[1384]: Reached target sockets.target. May 8 00:38:51.826818 systemd[1384]: Reached target timers.target. May 8 00:38:51.826829 systemd[1384]: Reached target basic.target. May 8 00:38:51.827005 systemd[1]: Started user@500.service. May 8 00:38:51.828024 systemd[1384]: Reached target default.target. May 8 00:38:51.828081 systemd[1384]: Startup finished in 128ms. May 8 00:38:51.829707 systemd[1]: Started session-1.scope. May 8 00:38:51.885829 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:53522.service. 
May 8 00:38:51.948952 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 53522 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:38:51.949380 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:38:51.953905 systemd-logind[1294]: New session 2 of user core. May 8 00:38:51.954657 systemd[1]: Started session-2.scope. May 8 00:38:52.042296 sshd[1393]: pam_unix(sshd:session): session closed for user core May 8 00:38:52.046092 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:53522.service: Deactivated successfully. May 8 00:38:52.048712 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:53524.service. May 8 00:38:52.049037 systemd-logind[1294]: Session 2 logged out. Waiting for processes to exit. May 8 00:38:52.051212 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:38:52.053112 systemd[1]: Started kubelet.service. May 8 00:38:52.054364 systemd[1]: Reached target multi-user.target. May 8 00:38:52.056712 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 8 00:38:52.059500 systemd-logind[1294]: Removed session 2. May 8 00:38:52.065020 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 00:38:52.065245 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 00:38:52.071715 systemd[1]: Startup finished in 5.816s (kernel) + 8.212s (userspace) = 14.028s. May 8 00:38:52.086721 sshd[1402]: Accepted publickey for core from 10.0.0.1 port 53524 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:38:52.089436 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:38:52.094552 systemd-logind[1294]: New session 3 of user core. May 8 00:38:52.094935 systemd[1]: Started session-3.scope. May 8 00:38:52.215638 sshd[1402]: pam_unix(sshd:session): session closed for user core May 8 00:38:52.218044 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:53524.service: Deactivated successfully. 
May 8 00:38:52.219288 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:38:52.219295 systemd-logind[1294]: Session 3 logged out. Waiting for processes to exit. May 8 00:38:52.220273 systemd-logind[1294]: Removed session 3. May 8 00:38:53.037424 kubelet[1406]: E0508 00:38:53.037122 1406 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:38:53.039276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:38:53.039436 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:02.190997 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:48240.service. May 8 00:39:02.222499 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 48240 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:39:02.223974 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:02.228207 systemd-logind[1294]: New session 4 of user core. May 8 00:39:02.229182 systemd[1]: Started session-4.scope. May 8 00:39:02.285494 sshd[1423]: pam_unix(sshd:session): session closed for user core May 8 00:39:02.288440 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:48252.service. May 8 00:39:02.289010 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:48240.service: Deactivated successfully. May 8 00:39:02.290149 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:39:02.290869 systemd-logind[1294]: Session 4 logged out. Waiting for processes to exit. May 8 00:39:02.291969 systemd-logind[1294]: Removed session 4. 
May 8 00:39:02.321548 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 48252 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:39:02.323124 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:02.326780 systemd-logind[1294]: New session 5 of user core. May 8 00:39:02.327548 systemd[1]: Started session-5.scope. May 8 00:39:02.377988 sshd[1428]: pam_unix(sshd:session): session closed for user core May 8 00:39:02.380322 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:48254.service. May 8 00:39:02.381469 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:48252.service: Deactivated successfully. May 8 00:39:02.382345 systemd-logind[1294]: Session 5 logged out. Waiting for processes to exit. May 8 00:39:02.382428 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:39:02.383271 systemd-logind[1294]: Removed session 5. May 8 00:39:02.411955 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 48254 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:39:02.412962 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:02.416597 systemd-logind[1294]: New session 6 of user core. May 8 00:39:02.417330 systemd[1]: Started session-6.scope. May 8 00:39:02.473476 sshd[1435]: pam_unix(sshd:session): session closed for user core May 8 00:39:02.476488 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:48256.service. May 8 00:39:02.477063 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:48254.service: Deactivated successfully. May 8 00:39:02.478134 systemd-logind[1294]: Session 6 logged out. Waiting for processes to exit. May 8 00:39:02.478212 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:39:02.479220 systemd-logind[1294]: Removed session 6. 
May 8 00:39:02.507489 sshd[1443]: Accepted publickey for core from 10.0.0.1 port 48256 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:39:02.508585 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:02.512110 systemd-logind[1294]: New session 7 of user core. May 8 00:39:02.512782 systemd[1]: Started session-7.scope. May 8 00:39:02.574351 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:39:02.574581 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:39:02.585082 dbus-daemon[1281]: avc: received setenforce notice (enforcing=-1586001472) May 8 00:39:02.587155 sudo[1448]: pam_unix(sudo:session): session closed for user root May 8 00:39:02.589109 sshd[1443]: pam_unix(sshd:session): session closed for user core May 8 00:39:02.591642 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:48268.service. May 8 00:39:02.592636 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:48256.service: Deactivated successfully. May 8 00:39:02.593533 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:39:02.593608 systemd-logind[1294]: Session 7 logged out. Waiting for processes to exit. May 8 00:39:02.594730 systemd-logind[1294]: Removed session 7. May 8 00:39:02.624685 sshd[1450]: Accepted publickey for core from 10.0.0.1 port 48268 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:39:02.626212 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:02.629754 systemd-logind[1294]: New session 8 of user core. May 8 00:39:02.630589 systemd[1]: Started session-8.scope.
May 8 00:39:02.684062 sudo[1457]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:39:02.684278 sudo[1457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:39:02.687049 sudo[1457]: pam_unix(sudo:session): session closed for user root May 8 00:39:02.691015 sudo[1456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:39:02.691191 sudo[1456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:39:02.699412 systemd[1]: Stopping audit-rules.service... May 8 00:39:02.699000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 8 00:39:02.700641 auditctl[1460]: No rules May 8 00:39:02.700960 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:39:02.701164 systemd[1]: Stopped audit-rules.service. May 8 00:39:02.702687 systemd[1]: Starting audit-rules.service... May 8 00:39:02.725043 kernel: kauditd_printk_skb: 137 callbacks suppressed May 8 00:39:02.725097 kernel: audit: type=1305 audit(1746664742.699:143): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 8 00:39:02.699000 audit[1460]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc65bbbb0 a2=420 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:02.734243 augenrules[1478]: No rules May 8 00:39:02.735005 systemd[1]: Finished audit-rules.service. 
May 8 00:39:02.757316 kernel: audit: type=1300 audit(1746664742.699:143): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc65bbbb0 a2=420 a3=0 items=0 ppid=1 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:02.757372 kernel: audit: type=1327 audit(1746664742.699:143): proctitle=2F7362696E2F617564697463746C002D44 May 8 00:39:02.699000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 8 00:39:02.757639 sudo[1456]: pam_unix(sudo:session): session closed for user root May 8 00:39:02.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:02.759339 sshd[1450]: pam_unix(sshd:session): session closed for user core May 8 00:39:02.762234 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:48274.service. May 8 00:39:02.762831 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:48268.service: Deactivated successfully. May 8 00:39:02.764527 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:39:02.764889 systemd-logind[1294]: Session 8 logged out. Waiting for processes to exit. May 8 00:39:02.766321 systemd-logind[1294]: Removed session 8. May 8 00:39:02.785879 kernel: audit: type=1131 audit(1746664742.700:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:02.785937 kernel: audit: type=1130 audit(1746664742.734:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:39:02.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:02.789190 kernel: audit: type=1106 audit(1746664742.756:146): pid=1456 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:02.756000 audit[1456]: USER_END pid=1456 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:02.756000 audit[1456]: CRED_DISP pid=1456 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:02.796667 kernel: audit: type=1104 audit(1746664742.756:147): pid=1456 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 8 00:39:02.796700 kernel: audit: type=1106 audit(1746664742.759:148): pid=1450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.759000 audit[1450]: USER_END pid=1450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.801197 kernel: audit: type=1104 audit(1746664742.759:149): pid=1450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.759000 audit[1450]: CRED_DISP pid=1450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.804940 kernel: audit: type=1130 audit(1746664742.761:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.16:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:02.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.16:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:39:02.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.16:22-10.0.0.1:48268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:02.814000 audit[1484]: USER_ACCT pid=1484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.815439 sshd[1484]: Accepted publickey for core from 10.0.0.1 port 48274 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:39:02.815000 audit[1484]: CRED_ACQ pid=1484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.815000 audit[1484]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3564fc60 a2=3 a3=0 items=0 ppid=1 pid=1484 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:02.815000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:39:02.816503 sshd[1484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:02.819529 systemd-logind[1294]: New session 9 of user core. May 8 00:39:02.820276 systemd[1]: Started session-9.scope. 
May 8 00:39:02.822000 audit[1484]: USER_START pid=1484 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.823000 audit[1488]: CRED_ACQ pid=1488 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:02.870000 audit[1489]: USER_ACCT pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:02.871487 sudo[1489]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:39:02.870000 audit[1489]: CRED_REFR pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:02.871671 sudo[1489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:39:02.872000 audit[1489]: USER_START pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:02.900808 systemd[1]: Starting docker.service... 
May 8 00:39:02.960791 env[1501]: time="2025-05-08T00:39:02.960729062Z" level=info msg="Starting up"
May 8 00:39:02.962390 env[1501]: time="2025-05-08T00:39:02.962340331Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 8 00:39:02.962390 env[1501]: time="2025-05-08T00:39:02.962375171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 8 00:39:02.962537 env[1501]: time="2025-05-08T00:39:02.962403900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 8 00:39:02.962537 env[1501]: time="2025-05-08T00:39:02.962419554Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:39:02.964911 env[1501]: time="2025-05-08T00:39:02.964891998Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 8 00:39:02.965024 env[1501]: time="2025-05-08T00:39:02.965006923Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 8 00:39:02.965130 env[1501]: time="2025-05-08T00:39:02.965109094Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 8 00:39:02.965223 env[1501]: time="2025-05-08T00:39:02.965205524Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:39:03.201400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:03.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:03.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:03.201605 systemd[1]: Stopped kubelet.service.
May 8 00:39:03.203299 systemd[1]: Starting kubelet.service...
May 8 00:39:03.655864 systemd[1]: Started kubelet.service.
May 8 00:39:03.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:04.439055 kubelet[1519]: E0508 00:39:04.438975 1519 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:04.442058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:04.442265 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:04.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 8 00:39:04.517938 env[1501]: time="2025-05-08T00:39:04.517784187Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 8 00:39:04.517938 env[1501]: time="2025-05-08T00:39:04.517861605Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 8 00:39:04.518586 env[1501]: time="2025-05-08T00:39:04.518191436Z" level=info msg="Loading containers: start."
May 8 00:39:04.567000 audit[1552]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.567000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff73010d70 a2=0 a3=7fff73010d5c items=0 ppid=1501 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.567000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
May 8 00:39:04.569000 audit[1554]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.569000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc80e7b310 a2=0 a3=7ffc80e7b2fc items=0 ppid=1501 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.569000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
May 8 00:39:04.571000 audit[1556]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.571000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0b820ca0 a2=0 a3=7ffc0b820c8c items=0 ppid=1501 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.571000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 8 00:39:04.573000 audit[1558]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.573000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe445da880 a2=0 a3=7ffe445da86c items=0 ppid=1501 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.573000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 8 00:39:04.576000 audit[1560]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.576000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0125a1f0 a2=0 a3=7ffe0125a1dc items=0 ppid=1501 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.576000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
May 8 00:39:04.590000 audit[1565]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.590000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc3d0ee50 a2=0 a3=7ffdc3d0ee3c items=0 ppid=1501 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.590000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
May 8 00:39:04.601000 audit[1567]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.601000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff29366ea0 a2=0 a3=7fff29366e8c items=0 ppid=1501 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
May 8 00:39:04.602000 audit[1569]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.602000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffc326eb70 a2=0 a3=7fffc326eb5c items=0 ppid=1501 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.602000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
May 8 00:39:04.604000 audit[1571]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.604000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcbf9fd430 a2=0 a3=7ffcbf9fd41c items=0 ppid=1501 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.604000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 8 00:39:04.614000 audit[1575]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.614000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffee56f2850 a2=0 a3=7ffee56f283c items=0 ppid=1501 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.614000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 8 00:39:04.619000 audit[1576]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.619000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd3af6e170 a2=0 a3=7ffd3af6e15c items=0 ppid=1501 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.619000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 8 00:39:04.630864 kernel: Initializing XFRM netlink socket
May 8 00:39:04.661941 env[1501]: time="2025-05-08T00:39:04.661899618Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 8 00:39:04.680000 audit[1585]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.680000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff30bb7e80 a2=0 a3=7fff30bb7e6c items=0 ppid=1501 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.680000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
May 8 00:39:04.691000 audit[1588]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.691000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffd1dce1f0 a2=0 a3=7fffd1dce1dc items=0 ppid=1501 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.691000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
May 8 00:39:04.694000 audit[1591]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.694000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffca35b8d20 a2=0 a3=7ffca35b8d0c items=0 ppid=1501 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.694000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
May 8 00:39:04.696000 audit[1593]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.696000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffddd1dc430 a2=0 a3=7ffddd1dc41c items=0 ppid=1501 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.696000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
May 8 00:39:04.698000 audit[1595]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.698000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcf1911ea0 a2=0 a3=7ffcf1911e8c items=0 ppid=1501 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.698000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
May 8 00:39:04.700000 audit[1597]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.700000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe3885f6b0 a2=0 a3=7ffe3885f69c items=0 ppid=1501 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.700000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
May 8 00:39:04.702000 audit[1599]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.702000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff9c1e1400 a2=0 a3=7fff9c1e13ec items=0 ppid=1501 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.702000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
May 8 00:39:04.709000 audit[1602]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.709000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcadb4f110 a2=0 a3=7ffcadb4f0fc items=0 ppid=1501 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.709000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
May 8 00:39:04.711000 audit[1604]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1604 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.711000 audit[1604]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe887609b0 a2=0 a3=7ffe8876099c items=0 ppid=1501 pid=1604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.711000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 8 00:39:04.713000 audit[1606]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.713000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcbf1e9810 a2=0 a3=7ffcbf1e97fc items=0 ppid=1501 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.713000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 8 00:39:04.715000 audit[1608]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1608 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.715000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff25bf4bb0 a2=0 a3=7fff25bf4b9c items=0 ppid=1501 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.715000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
May 8 00:39:04.716717 systemd-networkd[1082]: docker0: Link UP
May 8 00:39:04.726000 audit[1612]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.726000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffffd5b6bb0 a2=0 a3=7ffffd5b6b9c items=0 ppid=1501 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.726000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 8 00:39:04.732000 audit[1613]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 8 00:39:04.732000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdd52a67f0 a2=0 a3=7ffdd52a67dc items=0 ppid=1501 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:39:04.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 8 00:39:04.734110 env[1501]: time="2025-05-08T00:39:04.734062695Z" level=info msg="Loading containers: done."
May 8 00:39:04.748483 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4186891529-merged.mount: Deactivated successfully.
May 8 00:39:04.753528 env[1501]: time="2025-05-08T00:39:04.753454121Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:39:04.753737 env[1501]: time="2025-05-08T00:39:04.753709664Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 8 00:39:04.753882 env[1501]: time="2025-05-08T00:39:04.753861401Z" level=info msg="Daemon has completed initialization"
May 8 00:39:04.776802 systemd[1]: Started docker.service.
May 8 00:39:04.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:04.782097 env[1501]: time="2025-05-08T00:39:04.782007612Z" level=info msg="API listen on /run/docker.sock"
May 8 00:39:06.304288 env[1314]: time="2025-05-08T00:39:06.304209587Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:39:07.079468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625784869.mount: Deactivated successfully.
May 8 00:39:09.634933 env[1314]: time="2025-05-08T00:39:09.634874976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:09.637061 env[1314]: time="2025-05-08T00:39:09.637024231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:09.639079 env[1314]: time="2025-05-08T00:39:09.639040065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:09.641162 env[1314]: time="2025-05-08T00:39:09.641122134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:09.642111 env[1314]: time="2025-05-08T00:39:09.642070694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 8 00:39:09.689863 env[1314]: time="2025-05-08T00:39:09.689808297Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:39:12.609208 env[1314]: time="2025-05-08T00:39:12.609135418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:12.611084 env[1314]: time="2025-05-08T00:39:12.611013532Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:12.613501 env[1314]: time="2025-05-08T00:39:12.613467680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:12.615183 env[1314]: time="2025-05-08T00:39:12.615148632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:12.615770 env[1314]: time="2025-05-08T00:39:12.615730502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 8 00:39:12.631368 env[1314]: time="2025-05-08T00:39:12.631305390Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:39:14.451453 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:39:14.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:14.451637 systemd[1]: Stopped kubelet.service.
May 8 00:39:14.452852 kernel: kauditd_printk_skb: 88 callbacks suppressed
May 8 00:39:14.452905 kernel: audit: type=1130 audit(1746664754.450:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:14.453563 systemd[1]: Starting kubelet.service...
May 8 00:39:14.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:14.460849 kernel: audit: type=1131 audit(1746664754.450:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:14.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:14.534891 systemd[1]: Started kubelet.service.
May 8 00:39:14.545889 kernel: audit: type=1130 audit(1746664754.534:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:39:14.598523 kubelet[1679]: E0508 00:39:14.598441 1679 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:14.600536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:14.600698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:14.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 8 00:39:14.605856 kernel: audit: type=1131 audit(1746664754.600:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 8 00:39:14.872223 env[1314]: time="2025-05-08T00:39:14.872071475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:14.874448 env[1314]: time="2025-05-08T00:39:14.874412911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:14.876481 env[1314]: time="2025-05-08T00:39:14.876418843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:14.878548 env[1314]: time="2025-05-08T00:39:14.878507707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:14.879261 env[1314]: time="2025-05-08T00:39:14.879232554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 8 00:39:14.888974 env[1314]: time="2025-05-08T00:39:14.888936201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:39:16.183025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766967506.mount: Deactivated successfully.
May 8 00:39:17.933958 env[1314]: time="2025-05-08T00:39:17.933832060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:18.028403 env[1314]: time="2025-05-08T00:39:18.028313012Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:18.063047 env[1314]: time="2025-05-08T00:39:18.062982171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:18.072934 env[1314]: time="2025-05-08T00:39:18.072894978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:18.073431 env[1314]: time="2025-05-08T00:39:18.073402454Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 8 00:39:18.087937 env[1314]: time="2025-05-08T00:39:18.087893841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:39:18.635607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699803790.mount: Deactivated successfully.
May 8 00:39:19.930641 env[1314]: time="2025-05-08T00:39:19.930577674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:19.932442 env[1314]: time="2025-05-08T00:39:19.932385857Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:19.933939 env[1314]: time="2025-05-08T00:39:19.933910751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:19.935740 env[1314]: time="2025-05-08T00:39:19.935691920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:39:19.936684 env[1314]: time="2025-05-08T00:39:19.936640412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 8 00:39:19.952343 env[1314]: time="2025-05-08T00:39:19.952295907Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 8 00:39:20.459483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915880830.mount: Deactivated successfully.
May 8 00:39:20.463765 env[1314]: time="2025-05-08T00:39:20.463698757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:20.465614 env[1314]: time="2025-05-08T00:39:20.465575512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:20.468280 env[1314]: time="2025-05-08T00:39:20.468231909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:20.469827 env[1314]: time="2025-05-08T00:39:20.469778649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:20.470165 env[1314]: time="2025-05-08T00:39:20.470131520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:39:20.482536 env[1314]: time="2025-05-08T00:39:20.482501485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:39:21.088804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631716752.mount: Deactivated successfully. 
May 8 00:39:24.672701 env[1314]: time="2025-05-08T00:39:24.672622012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:24.674759 env[1314]: time="2025-05-08T00:39:24.674709844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:24.676853 env[1314]: time="2025-05-08T00:39:24.676794560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:24.678957 env[1314]: time="2025-05-08T00:39:24.678907131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:24.679781 env[1314]: time="2025-05-08T00:39:24.679743695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:39:24.701247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 00:39:24.709238 kernel: audit: type=1130 audit(1746664764.700:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:24.709279 kernel: audit: type=1131 audit(1746664764.700:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:39:24.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:24.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:24.701464 systemd[1]: Stopped kubelet.service. May 8 00:39:24.703347 systemd[1]: Starting kubelet.service... May 8 00:39:24.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:24.783362 systemd[1]: Started kubelet.service. May 8 00:39:24.787859 kernel: audit: type=1130 audit(1746664764.782:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:24.877406 kubelet[1728]: E0508 00:39:24.877329 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:24.879401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:24.879556 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:24.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 8 00:39:24.885884 kernel: audit: type=1131 audit(1746664764.879:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 8 00:39:27.459326 systemd[1]: Stopped kubelet.service. May 8 00:39:27.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.462441 systemd[1]: Starting kubelet.service... May 8 00:39:27.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.466151 kernel: audit: type=1130 audit(1746664767.458:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.466283 kernel: audit: type=1131 audit(1746664767.458:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.483625 systemd[1]: Reloading. 
May 8 00:39:27.543155 /usr/lib/systemd/system-generators/torcx-generator[1829]: time="2025-05-08T00:39:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:39:27.543197 /usr/lib/systemd/system-generators/torcx-generator[1829]: time="2025-05-08T00:39:27Z" level=info msg="torcx already run" May 8 00:39:27.754679 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:39:27.754698 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:39:27.772194 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:27.846207 systemd[1]: Started kubelet.service. May 8 00:39:27.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.850871 kernel: audit: type=1130 audit(1746664767.845:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.851598 systemd[1]: Stopping kubelet.service... May 8 00:39:27.852545 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:39:27.852768 systemd[1]: Stopped kubelet.service. 
May 8 00:39:27.856866 kernel: audit: type=1131 audit(1746664767.851:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.854182 systemd[1]: Starting kubelet.service... May 8 00:39:27.933813 systemd[1]: Started kubelet.service. May 8 00:39:27.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.941864 kernel: audit: type=1130 audit(1746664767.933:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:27.976958 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:27.976958 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:39:27.976958 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:39:27.977389 kubelet[1897]: I0508 00:39:27.977008 1897 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:28.269973 kubelet[1897]: I0508 00:39:28.269925 1897 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:39:28.269973 kubelet[1897]: I0508 00:39:28.269957 1897 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:28.270217 kubelet[1897]: I0508 00:39:28.270197 1897 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:39:28.283448 kubelet[1897]: I0508 00:39:28.283113 1897 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:28.284096 kubelet[1897]: E0508 00:39:28.284072 1897 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.295079 kubelet[1897]: I0508 00:39:28.295027 1897 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:39:28.297716 kubelet[1897]: I0508 00:39:28.297667 1897 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:28.297931 kubelet[1897]: I0508 00:39:28.297707 1897 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:39:28.298083 kubelet[1897]: I0508 00:39:28.297951 1897 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:39:28.298083 kubelet[1897]: I0508 00:39:28.297964 1897 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:39:28.298170 kubelet[1897]: I0508 00:39:28.298125 1897 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:28.299059 kubelet[1897]: I0508 00:39:28.299027 1897 kubelet.go:400] "Attempting to sync node with API server" May 8 00:39:28.299059 kubelet[1897]: I0508 00:39:28.299047 1897 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:39:28.299171 kubelet[1897]: I0508 00:39:28.299086 1897 kubelet.go:312] "Adding apiserver pod source" May 8 00:39:28.299171 kubelet[1897]: I0508 00:39:28.299107 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:39:28.306736 kubelet[1897]: W0508 00:39:28.306687 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.306790 kubelet[1897]: E0508 00:39:28.306756 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.308360 kubelet[1897]: W0508 00:39:28.308324 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.308407 kubelet[1897]: E0508 00:39:28.308365 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 
00:39:28.314766 kubelet[1897]: I0508 00:39:28.314746 1897 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:39:28.320301 kubelet[1897]: I0508 00:39:28.320275 1897 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:39:28.320389 kubelet[1897]: W0508 00:39:28.320356 1897 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:39:28.321013 kubelet[1897]: I0508 00:39:28.320982 1897 server.go:1264] "Started kubelet" May 8 00:39:28.321000 audit[1897]: AVC avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:28.322225 kubelet[1897]: I0508 00:39:28.322143 1897 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 8 00:39:28.322225 kubelet[1897]: I0508 00:39:28.322179 1897 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 8 00:39:28.322327 kubelet[1897]: I0508 00:39:28.322305 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:39:28.321000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:28.321000 audit[1897]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008dab40 a1=c000509500 a2=c0008dab10 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 
00:39:28.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:28.321000 audit[1897]: AVC avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:28.321000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:28.321000 audit[1897]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009b9060 a1=c000509518 a2=c0008dabd0 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:28.323000 audit[1909]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.323000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffddbdc1f40 a2=0 a3=7ffddbdc1f2c items=0 ppid=1897 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 8 00:39:28.324000 audit[1910]: 
NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.324000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf3e1ec50 a2=0 a3=7ffdf3e1ec3c items=0 ppid=1897 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 8 00:39:28.326831 kubelet[1897]: I0508 00:39:28.325710 1897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:39:28.326831 kubelet[1897]: I0508 00:39:28.325873 1897 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:39:28.326831 kubelet[1897]: I0508 00:39:28.325974 1897 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:39:28.326831 kubelet[1897]: I0508 00:39:28.326044 1897 reconciler.go:26] "Reconciler: start to sync state" May 8 00:39:28.326831 kubelet[1897]: W0508 00:39:28.326421 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.326831 kubelet[1897]: E0508 00:39:28.326465 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.326831 kubelet[1897]: I0508 00:39:28.326800 1897 server.go:455] "Adding debug handlers to kubelet server" May 8 00:39:28.327017 kernel: audit: type=1400 audit(1746664768.321:202): avc: denied { mac_admin } for 
pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:28.327711 kubelet[1897]: I0508 00:39:28.327658 1897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:39:28.327965 kubelet[1897]: I0508 00:39:28.327912 1897 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:39:28.328000 audit[1912]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.328000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcc28edf50 a2=0 a3=7ffcc28edf3c items=0 ppid=1897 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 8 00:39:28.329906 kubelet[1897]: E0508 00:39:28.329806 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" May 8 00:39:28.331016 kubelet[1897]: I0508 00:39:28.330978 1897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:39:28.330000 audit[1914]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.330000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=312 a0=3 a1=7ffd9a58b3e0 a2=0 a3=7ffd9a58b3cc items=0 ppid=1897 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 8 00:39:28.332573 kubelet[1897]: I0508 00:39:28.332544 1897 factory.go:221] Registration of the containerd container factory successfully May 8 00:39:28.332573 kubelet[1897]: I0508 00:39:28.332559 1897 factory.go:221] Registration of the systemd container factory successfully May 8 00:39:28.338631 kubelet[1897]: E0508 00:39:28.338462 1897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d665a4175d030 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:28.32095032 +0000 UTC m=+0.381071515,LastTimestamp:2025-05-08 00:39:28.32095032 +0000 UTC m=+0.381071515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:39:28.338820 kubelet[1897]: E0508 00:39:28.338735 1897 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:39:28.346000 audit[1920]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.346000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc90c075f0 a2=0 a3=7ffc90c075dc items=0 ppid=1897 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 8 00:39:28.348008 kubelet[1897]: I0508 00:39:28.347791 1897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:39:28.347000 audit[1921]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:28.347000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb4ab0500 a2=0 a3=7ffdb4ab04ec items=0 ppid=1897 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 8 00:39:28.348816 kubelet[1897]: I0508 00:39:28.348796 1897 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:39:28.348944 kubelet[1897]: I0508 00:39:28.348829 1897 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:39:28.348944 kubelet[1897]: I0508 00:39:28.348877 1897 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:39:28.349042 kubelet[1897]: E0508 00:39:28.349008 1897 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:39:28.348000 audit[1922]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:28.348000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc477f0300 a2=0 a3=7ffc477f02ec items=0 ppid=1897 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 8 00:39:28.349975 kubelet[1897]: W0508 00:39:28.349798 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.349000 audit[1923]: NETFILTER_CFG table=mangle:33 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.349000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe481ea4b0 a2=0 a3=7ffe481ea49c items=0 ppid=1897 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 8 00:39:28.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 8 00:39:28.350365 kubelet[1897]: E0508 00:39:28.350083 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:28.349000 audit[1924]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:28.349000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffba2c3350 a2=0 a3=7fffba2c333c items=0 ppid=1897 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 8 00:39:28.350000 audit[1925]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.350000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedc937b50 a2=0 a3=7ffedc937b3c items=0 ppid=1897 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 8 00:39:28.350000 audit[1926]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" May 8 00:39:28.350000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde345a510 a2=0 a3=7ffde345a4fc items=0 ppid=1897 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 8 00:39:28.351000 audit[1927]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:28.351000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcaef96eb0 a2=0 a3=7ffcaef96e9c items=0 ppid=1897 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 8 00:39:28.355503 kubelet[1897]: I0508 00:39:28.355466 1897 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:39:28.355503 kubelet[1897]: I0508 00:39:28.355487 1897 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:39:28.355603 kubelet[1897]: I0508 00:39:28.355507 1897 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:28.385496 kubelet[1897]: E0508 00:39:28.385375 1897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d665a4175d030 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:28.32095032 +0000 UTC m=+0.381071515,LastTimestamp:2025-05-08 00:39:28.32095032 +0000 UTC m=+0.381071515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:39:28.427675 kubelet[1897]: I0508 00:39:28.427637 1897 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:28.428117 kubelet[1897]: E0508 00:39:28.428075 1897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 8 00:39:28.449906 kubelet[1897]: E0508 00:39:28.449869 1897 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:39:28.530988 kubelet[1897]: E0508 00:39:28.530819 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" May 8 00:39:28.629442 kubelet[1897]: I0508 00:39:28.629394 1897 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:28.629924 kubelet[1897]: E0508 00:39:28.629880 1897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 8 00:39:28.651006 kubelet[1897]: E0508 00:39:28.650953 1897 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:39:28.802357 
kubelet[1897]: I0508 00:39:28.802219 1897 policy_none.go:49] "None policy: Start" May 8 00:39:28.803107 kubelet[1897]: I0508 00:39:28.803089 1897 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:39:28.803182 kubelet[1897]: I0508 00:39:28.803115 1897 state_mem.go:35] "Initializing new in-memory state store" May 8 00:39:28.815009 kubelet[1897]: I0508 00:39:28.814957 1897 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:28.815009 kubelet[1897]: I0508 00:39:28.815057 1897 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 8 00:39:28.814000 audit[1897]: AVC avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:28.814000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:28.814000 audit[1897]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f39b00 a1=c000697b18 a2=c000f39aa0 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:28.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:28.815501 kubelet[1897]: I0508 00:39:28.815217 1897 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:28.815501 kubelet[1897]: I0508 
00:39:28.815425 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:28.817111 kubelet[1897]: E0508 00:39:28.817089 1897 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:39:28.932562 kubelet[1897]: E0508 00:39:28.932484 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" May 8 00:39:29.032088 kubelet[1897]: I0508 00:39:29.032041 1897 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:29.032493 kubelet[1897]: E0508 00:39:29.032401 1897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 8 00:39:29.051660 kubelet[1897]: I0508 00:39:29.051568 1897 topology_manager.go:215] "Topology Admit Handler" podUID="3c5f0345a2b418768000055d9d4e171e" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:39:29.053058 kubelet[1897]: I0508 00:39:29.052948 1897 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:39:29.054226 kubelet[1897]: I0508 00:39:29.054190 1897 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:39:29.131213 kubelet[1897]: I0508 00:39:29.131163 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c5f0345a2b418768000055d9d4e171e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c5f0345a2b418768000055d9d4e171e\") " 
pod="kube-system/kube-apiserver-localhost" May 8 00:39:29.131213 kubelet[1897]: I0508 00:39:29.131204 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:29.131213 kubelet[1897]: I0508 00:39:29.131224 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:39:29.131416 kubelet[1897]: I0508 00:39:29.131237 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:29.131416 kubelet[1897]: I0508 00:39:29.131250 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:29.131416 kubelet[1897]: I0508 00:39:29.131264 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 8 00:39:29.131416 kubelet[1897]: I0508 00:39:29.131282 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c5f0345a2b418768000055d9d4e171e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c5f0345a2b418768000055d9d4e171e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:29.131416 kubelet[1897]: I0508 00:39:29.131295 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c5f0345a2b418768000055d9d4e171e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c5f0345a2b418768000055d9d4e171e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:29.131535 kubelet[1897]: I0508 00:39:29.131311 1897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:29.357501 kubelet[1897]: E0508 00:39:29.357376 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:29.357813 kubelet[1897]: E0508 00:39:29.357779 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:29.358308 env[1314]: time="2025-05-08T00:39:29.358261692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c5f0345a2b418768000055d9d4e171e,Namespace:kube-system,Attempt:0,}" May 8 00:39:29.358956 kubelet[1897]: E0508 
00:39:29.358790 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:29.359021 env[1314]: time="2025-05-08T00:39:29.358875927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:39:29.359191 env[1314]: time="2025-05-08T00:39:29.359151707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:39:29.367644 kubelet[1897]: W0508 00:39:29.367601 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.367644 kubelet[1897]: E0508 00:39:29.367641 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.447786 kubelet[1897]: W0508 00:39:29.447702 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.447786 kubelet[1897]: E0508 00:39:29.447792 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.733451 kubelet[1897]: E0508 
00:39:29.733379 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" May 8 00:39:29.826905 kubelet[1897]: W0508 00:39:29.826809 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.826905 kubelet[1897]: E0508 00:39:29.826905 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.834167 kubelet[1897]: I0508 00:39:29.834130 1897 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:29.834585 kubelet[1897]: E0508 00:39:29.834554 1897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 8 00:39:29.850481 kubelet[1897]: W0508 00:39:29.850379 1897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.850481 kubelet[1897]: E0508 00:39:29.850459 1897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:29.942661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830958543.mount: Deactivated 
successfully. May 8 00:39:29.948527 env[1314]: time="2025-05-08T00:39:29.948456607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.950311 env[1314]: time="2025-05-08T00:39:29.950272928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.951373 env[1314]: time="2025-05-08T00:39:29.951305384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.953231 env[1314]: time="2025-05-08T00:39:29.953201633Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.955079 env[1314]: time="2025-05-08T00:39:29.955019349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.955975 env[1314]: time="2025-05-08T00:39:29.955921237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.957315 env[1314]: time="2025-05-08T00:39:29.957286064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.958556 env[1314]: time="2025-05-08T00:39:29.958528381Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.960132 env[1314]: time="2025-05-08T00:39:29.960097278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.961926 env[1314]: time="2025-05-08T00:39:29.961894113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.963431 env[1314]: time="2025-05-08T00:39:29.963383253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:29.964061 env[1314]: time="2025-05-08T00:39:29.964029871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:30.116992 env[1314]: time="2025-05-08T00:39:30.113920829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:30.116992 env[1314]: time="2025-05-08T00:39:30.113961618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:30.116992 env[1314]: time="2025-05-08T00:39:30.113971187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:30.116992 env[1314]: time="2025-05-08T00:39:30.114161340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73085b757beafc2a246a40f139763376f6514fc319d4150fc58d6cf78d29ab7b pid=1938 runtime=io.containerd.runc.v2 May 8 00:39:30.128277 env[1314]: time="2025-05-08T00:39:30.128209911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:30.128277 env[1314]: time="2025-05-08T00:39:30.128269718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:30.128489 env[1314]: time="2025-05-08T00:39:30.128291010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:30.128542 env[1314]: time="2025-05-08T00:39:30.128484038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6502d052a54071421be17190b9b9074d1c95cdd4c9c87c386947accfa8639ec pid=1958 runtime=io.containerd.runc.v2 May 8 00:39:30.135899 env[1314]: time="2025-05-08T00:39:30.135779624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:30.135899 env[1314]: time="2025-05-08T00:39:30.135869259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:30.136236 env[1314]: time="2025-05-08T00:39:30.136159157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:30.136871 env[1314]: time="2025-05-08T00:39:30.136768148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffdee5ea3bd6946885f2303cd109d89b466aa9a6d587c545d9fd97ff9e8c611b pid=1978 runtime=io.containerd.runc.v2 May 8 00:39:30.306895 kubelet[1897]: E0508 00:39:30.306850 1897 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.16:6443: connect: connection refused May 8 00:39:30.310513 env[1314]: time="2025-05-08T00:39:30.310345602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffdee5ea3bd6946885f2303cd109d89b466aa9a6d587c545d9fd97ff9e8c611b\"" May 8 00:39:30.312609 kubelet[1897]: E0508 00:39:30.312049 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:30.315595 env[1314]: time="2025-05-08T00:39:30.315529207Z" level=info msg="CreateContainer within sandbox \"ffdee5ea3bd6946885f2303cd109d89b466aa9a6d587c545d9fd97ff9e8c611b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:39:30.341954 env[1314]: time="2025-05-08T00:39:30.341887818Z" level=info msg="CreateContainer within sandbox \"ffdee5ea3bd6946885f2303cd109d89b466aa9a6d587c545d9fd97ff9e8c611b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"86c15ccf36de7a775a0b981761b38d20c94f9bf306317ff5bf699b7df1cf387c\"" May 8 00:39:30.344020 env[1314]: time="2025-05-08T00:39:30.343966596Z" level=info msg="StartContainer for 
\"86c15ccf36de7a775a0b981761b38d20c94f9bf306317ff5bf699b7df1cf387c\"" May 8 00:39:30.344540 env[1314]: time="2025-05-08T00:39:30.344509388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6502d052a54071421be17190b9b9074d1c95cdd4c9c87c386947accfa8639ec\"" May 8 00:39:30.347449 kubelet[1897]: E0508 00:39:30.347418 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:30.350627 env[1314]: time="2025-05-08T00:39:30.350577112Z" level=info msg="CreateContainer within sandbox \"e6502d052a54071421be17190b9b9074d1c95cdd4c9c87c386947accfa8639ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:39:30.354925 env[1314]: time="2025-05-08T00:39:30.354871796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c5f0345a2b418768000055d9d4e171e,Namespace:kube-system,Attempt:0,} returns sandbox id \"73085b757beafc2a246a40f139763376f6514fc319d4150fc58d6cf78d29ab7b\"" May 8 00:39:30.356402 kubelet[1897]: E0508 00:39:30.356370 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:30.359648 env[1314]: time="2025-05-08T00:39:30.359601211Z" level=info msg="CreateContainer within sandbox \"73085b757beafc2a246a40f139763376f6514fc319d4150fc58d6cf78d29ab7b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:39:30.380534 env[1314]: time="2025-05-08T00:39:30.379660455Z" level=info msg="CreateContainer within sandbox \"e6502d052a54071421be17190b9b9074d1c95cdd4c9c87c386947accfa8639ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"321c95ad77591b420de99eb3c17b8c1498ba928fe89f79e09f6c886fa1ddb670\"" May 8 00:39:30.380534 env[1314]: time="2025-05-08T00:39:30.380417797Z" level=info msg="StartContainer for \"321c95ad77591b420de99eb3c17b8c1498ba928fe89f79e09f6c886fa1ddb670\"" May 8 00:39:30.381980 env[1314]: time="2025-05-08T00:39:30.381935206Z" level=info msg="CreateContainer within sandbox \"73085b757beafc2a246a40f139763376f6514fc319d4150fc58d6cf78d29ab7b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"19e312b4a0ae25812056ace2f9064615a912e32b9550599be9efb9b4e85cd09d\"" May 8 00:39:30.382548 env[1314]: time="2025-05-08T00:39:30.382509390Z" level=info msg="StartContainer for \"19e312b4a0ae25812056ace2f9064615a912e32b9550599be9efb9b4e85cd09d\"" May 8 00:39:30.482678 env[1314]: time="2025-05-08T00:39:30.482618334Z" level=info msg="StartContainer for \"86c15ccf36de7a775a0b981761b38d20c94f9bf306317ff5bf699b7df1cf387c\" returns successfully" May 8 00:39:30.534193 env[1314]: time="2025-05-08T00:39:30.533372484Z" level=info msg="StartContainer for \"321c95ad77591b420de99eb3c17b8c1498ba928fe89f79e09f6c886fa1ddb670\" returns successfully" May 8 00:39:30.552078 env[1314]: time="2025-05-08T00:39:30.551976431Z" level=info msg="StartContainer for \"19e312b4a0ae25812056ace2f9064615a912e32b9550599be9efb9b4e85cd09d\" returns successfully" May 8 00:39:31.365724 kubelet[1897]: E0508 00:39:31.365684 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:31.367360 kubelet[1897]: E0508 00:39:31.367321 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:31.370371 kubelet[1897]: E0508 00:39:31.370315 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:31.436871 kubelet[1897]: I0508 00:39:31.436808 1897 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:32.372329 kubelet[1897]: E0508 00:39:32.372286 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:32.372967 kubelet[1897]: E0508 00:39:32.372909 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:32.373154 kubelet[1897]: E0508 00:39:32.372983 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:32.440013 kubelet[1897]: E0508 00:39:32.439942 1897 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:39:32.528121 kubelet[1897]: I0508 00:39:32.528072 1897 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:39:33.304206 kubelet[1897]: I0508 00:39:33.304148 1897 apiserver.go:52] "Watching apiserver" May 8 00:39:33.326939 kubelet[1897]: I0508 00:39:33.326886 1897 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:34.874791 systemd[1]: Reloading. 
May 8 00:39:35.001706 /usr/lib/systemd/system-generators/torcx-generator[2189]: time="2025-05-08T00:39:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:39:35.001735 /usr/lib/systemd/system-generators/torcx-generator[2189]: time="2025-05-08T00:39:35Z" level=info msg="torcx already run" May 8 00:39:35.029030 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:39:35.029050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:39:35.048562 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:35.072536 update_engine[1300]: I0508 00:39:35.072459 1300 update_attempter.cc:509] Updating boot flags... May 8 00:39:35.128448 systemd[1]: Stopping kubelet.service... May 8 00:39:35.156945 kernel: kauditd_printk_skb: 47 callbacks suppressed May 8 00:39:35.157044 kernel: audit: type=1131 audit(1746664775.150:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:35.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:35.150946 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:39:35.151673 systemd[1]: Stopped kubelet.service. 
May 8 00:39:35.158757 systemd[1]: Starting kubelet.service... May 8 00:39:35.265176 systemd[1]: Started kubelet.service. May 8 00:39:35.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:35.271906 kernel: audit: type=1130 audit(1746664775.265:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:35.355014 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:35.355014 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:39:35.355014 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:39:35.355483 kubelet[2259]: I0508 00:39:35.355049 2259 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:35.360090 kubelet[2259]: I0508 00:39:35.360057 2259 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:39:35.360090 kubelet[2259]: I0508 00:39:35.360084 2259 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:35.360270 kubelet[2259]: I0508 00:39:35.360253 2259 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:39:35.361493 kubelet[2259]: I0508 00:39:35.361468 2259 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:39:35.362916 kubelet[2259]: I0508 00:39:35.362593 2259 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:35.372809 kubelet[2259]: I0508 00:39:35.372788 2259 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:39:35.373578 kubelet[2259]: I0508 00:39:35.373543 2259 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:35.374056 kubelet[2259]: I0508 00:39:35.373695 2259 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:39:35.374391 kubelet[2259]: I0508 00:39:35.374353 2259 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:39:35.374504 kubelet[2259]: I0508 00:39:35.374489 2259 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:39:35.374673 kubelet[2259]: I0508 00:39:35.374658 2259 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:35.374911 kubelet[2259]: I0508 00:39:35.374897 2259 kubelet.go:400] "Attempting to sync node with API server" May 8 00:39:35.374990 kubelet[2259]: I0508 00:39:35.374975 2259 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:39:35.375085 kubelet[2259]: I0508 00:39:35.375069 2259 kubelet.go:312] "Adding apiserver pod source" May 8 00:39:35.375171 kubelet[2259]: I0508 00:39:35.375156 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:39:35.378243 kubelet[2259]: I0508 00:39:35.376347 2259 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:39:35.378243 kubelet[2259]: I0508 00:39:35.376572 2259 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:39:35.378243 kubelet[2259]: I0508 00:39:35.377114 2259 server.go:1264] "Started kubelet" May 8 00:39:35.385295 kernel: audit: type=1400 audit(1746664775.378:219): avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:35.385441 kernel: audit: type=1401 audit(1746664775.378:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:35.378000 audit[2259]: AVC avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:35.378000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:35.386449 kubelet[2259]: I0508 00:39:35.380930 2259 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:39:35.386449 kubelet[2259]: I0508 00:39:35.381194 2259 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:39:35.386449 kubelet[2259]: I0508 00:39:35.381223 2259 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:39:35.386449 kubelet[2259]: E0508 00:39:35.381794 2259 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:39:35.386449 kubelet[2259]: I0508 00:39:35.382215 2259 server.go:455] "Adding debug handlers to kubelet server" May 8 00:39:35.378000 audit[2259]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b543f0 a1=c0005959c8 a2=c000b543c0 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:35.392518 kubelet[2259]: I0508 00:39:35.387888 2259 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 8 00:39:35.392518 kubelet[2259]: I0508 00:39:35.387946 2259 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 8 00:39:35.392518 kubelet[2259]: I0508 00:39:35.387980 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:39:35.392518 kubelet[2259]: I0508 00:39:35.390786 2259 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:39:35.392518 kubelet[2259]: I0508 00:39:35.390915 2259 desired_state_of_world_populator.go:149] "Desired 
state populator starts to run" May 8 00:39:35.392518 kubelet[2259]: I0508 00:39:35.391053 2259 reconciler.go:26] "Reconciler: start to sync state" May 8 00:39:35.397888 kernel: audit: type=1300 audit(1746664775.378:219): arch=c000003e syscall=188 success=no exit=-22 a0=c000b543f0 a1=c0005959c8 a2=c000b543c0 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:35.398028 kernel: audit: type=1327 audit(1746664775.378:219): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:35.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:35.398108 kubelet[2259]: I0508 00:39:35.393643 2259 factory.go:221] Registration of the containerd container factory successfully May 8 00:39:35.398108 kubelet[2259]: I0508 00:39:35.393684 2259 factory.go:221] Registration of the systemd container factory successfully May 8 00:39:35.398108 kubelet[2259]: I0508 00:39:35.393822 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:39:35.385000 audit[2259]: AVC avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:35.385000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:35.404316 kernel: audit: type=1400 audit(1746664775.385:220): avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:35.404387 kernel: audit: type=1401 audit(1746664775.385:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:35.385000 audit[2259]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009a4ae0 a1=c0005959e0 a2=c000b54480 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:35.410111 kernel: audit: type=1300 audit(1746664775.385:220): arch=c000003e syscall=188 success=no exit=-22 a0=c0009a4ae0 a1=c0005959e0 a2=c000b54480 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:35.410154 kernel: audit: type=1327 audit(1746664775.385:220): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:35.385000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:35.410194 kubelet[2259]: I0508 00:39:35.408560 2259 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 8 00:39:35.410194 kubelet[2259]: I0508 00:39:35.409588 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:39:35.410194 kubelet[2259]: I0508 00:39:35.409611 2259 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:39:35.410194 kubelet[2259]: I0508 00:39:35.409630 2259 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:39:35.410194 kubelet[2259]: E0508 00:39:35.409670 2259 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:39:35.440267 kubelet[2259]: I0508 00:39:35.440230 2259 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:39:35.440267 kubelet[2259]: I0508 00:39:35.440257 2259 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:39:35.440546 kubelet[2259]: I0508 00:39:35.440284 2259 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:35.440546 kubelet[2259]: I0508 00:39:35.440452 2259 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:39:35.440546 kubelet[2259]: I0508 00:39:35.440463 2259 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:39:35.440546 kubelet[2259]: I0508 00:39:35.440483 2259 policy_none.go:49] "None policy: Start" May 8 00:39:35.441511 kubelet[2259]: I0508 00:39:35.441479 2259 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:39:35.441566 kubelet[2259]: I0508 00:39:35.441553 2259 state_mem.go:35] "Initializing new in-memory state store" May 8 00:39:35.441814 kubelet[2259]: I0508 00:39:35.441786 2259 state_mem.go:75] "Updated machine memory state" May 8 00:39:35.443238 kubelet[2259]: I0508 00:39:35.443210 2259 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:35.442000 audit[2259]: AVC avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:39:35.442000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 8 00:39:35.442000 audit[2259]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0013453e0 a1=c0013467c8 a2=c0013453b0 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:35.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 8 00:39:35.443501 kubelet[2259]: I0508 00:39:35.443281 2259 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 8 00:39:35.443501 kubelet[2259]: I0508 00:39:35.443446 2259 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:35.445045 kubelet[2259]: I0508 00:39:35.445018 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:35.510580 kubelet[2259]: I0508 00:39:35.510470 2259 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:39:35.510775 kubelet[2259]: I0508 00:39:35.510627 2259 topology_manager.go:215] "Topology Admit Handler" podUID="3c5f0345a2b418768000055d9d4e171e" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:39:35.510775 kubelet[2259]: I0508 00:39:35.510696 2259 topology_manager.go:215] "Topology Admit Handler" 
podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:39:35.549265 kubelet[2259]: I0508 00:39:35.549223 2259 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:35.592691 kubelet[2259]: I0508 00:39:35.592627 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:35.592691 kubelet[2259]: I0508 00:39:35.592682 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:35.593069 kubelet[2259]: I0508 00:39:35.592712 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:35.593069 kubelet[2259]: I0508 00:39:35.592795 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:35.593069 kubelet[2259]: I0508 00:39:35.592867 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:35.593069 kubelet[2259]: I0508 00:39:35.592913 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c5f0345a2b418768000055d9d4e171e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c5f0345a2b418768000055d9d4e171e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:35.593069 kubelet[2259]: I0508 00:39:35.592994 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:39:35.602307 kubelet[2259]: I0508 00:39:35.593020 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c5f0345a2b418768000055d9d4e171e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c5f0345a2b418768000055d9d4e171e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:35.602307 kubelet[2259]: I0508 00:39:35.593041 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c5f0345a2b418768000055d9d4e171e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c5f0345a2b418768000055d9d4e171e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:35.655134 kubelet[2259]: I0508 00:39:35.655020 2259 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 
00:39:35.655233 kubelet[2259]: I0508 00:39:35.655146 2259 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:39:35.842618 kubelet[2259]: E0508 00:39:35.842562 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:35.842857 kubelet[2259]: E0508 00:39:35.842730 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:35.842857 kubelet[2259]: E0508 00:39:35.842780 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.376057 kubelet[2259]: I0508 00:39:36.376003 2259 apiserver.go:52] "Watching apiserver" May 8 00:39:36.392029 kubelet[2259]: I0508 00:39:36.391986 2259 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:36.421555 kubelet[2259]: E0508 00:39:36.421521 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.422018 kubelet[2259]: E0508 00:39:36.421974 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.480034 kubelet[2259]: E0508 00:39:36.479983 2259 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:39:36.480858 kubelet[2259]: E0508 00:39:36.480402 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.569766 kubelet[2259]: I0508 00:39:36.569686 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.569663061 podStartE2EDuration="1.569663061s" podCreationTimestamp="2025-05-08 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:36.569655176 +0000 UTC m=+1.256065938" watchObservedRunningTime="2025-05-08 00:39:36.569663061 +0000 UTC m=+1.256073803" May 8 00:39:36.596316 kubelet[2259]: I0508 00:39:36.596232 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5962107300000001 podStartE2EDuration="1.59621073s" podCreationTimestamp="2025-05-08 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:36.595712645 +0000 UTC m=+1.282123387" watchObservedRunningTime="2025-05-08 00:39:36.59621073 +0000 UTC m=+1.282621472" May 8 00:39:36.596606 kubelet[2259]: I0508 00:39:36.596353 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5963487060000001 podStartE2EDuration="1.596348706s" podCreationTimestamp="2025-05-08 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:36.58385631 +0000 UTC m=+1.270267062" watchObservedRunningTime="2025-05-08 00:39:36.596348706 +0000 UTC m=+1.282759458" May 8 00:39:37.423389 kubelet[2259]: E0508 00:39:37.423341 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.423988 kubelet[2259]: E0508 
00:39:37.423953 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:38.437764 kubelet[2259]: E0508 00:39:38.437714 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:40.516868 kubelet[2259]: E0508 00:39:40.516802 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:40.797000 audit[1489]: USER_END pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:40.797872 sudo[1489]: pam_unix(sudo:session): session closed for user root May 8 00:39:40.799061 kernel: kauditd_printk_skb: 4 callbacks suppressed May 8 00:39:40.799237 kernel: audit: type=1106 audit(1746664780.797:222): pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:40.799819 sshd[1484]: pam_unix(sshd:session): session closed for user core May 8 00:39:40.802245 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:48274.service: Deactivated successfully. May 8 00:39:40.797000 audit[1489]: CRED_DISP pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:40.803386 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:39:40.803425 systemd-logind[1294]: Session 9 logged out. 
Waiting for processes to exit. May 8 00:39:40.804450 systemd-logind[1294]: Removed session 9. May 8 00:39:40.807255 kernel: audit: type=1104 audit(1746664780.797:223): pid=1489 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 8 00:39:40.807322 kernel: audit: type=1106 audit(1746664780.798:224): pid=1484 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:40.798000 audit[1484]: USER_END pid=1484 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:40.811823 kernel: audit: type=1104 audit(1746664780.798:225): pid=1484 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:40.798000 audit[1484]: CRED_DISP pid=1484 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:39:40.815437 kernel: audit: type=1131 audit(1746664780.798:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.16:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:39:40.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.16:22-10.0.0.1:48274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:39:41.050437 kubelet[2259]: E0508 00:39:41.050313 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:41.433182 kubelet[2259]: E0508 00:39:41.432981 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:41.433182 kubelet[2259]: E0508 00:39:41.433025 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:46.958956 kubelet[2259]: E0508 00:39:46.958893 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:47.443449 kubelet[2259]: E0508 00:39:47.443275 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:48.048991 kubelet[2259]: I0508 00:39:48.048937 2259 topology_manager.go:215] "Topology Admit Handler" podUID="e7d6d4a3-b175-4de9-a48c-adc602706dbd" podNamespace="kube-system" podName="kube-proxy-92d2z" May 8 00:39:48.074667 kubelet[2259]: I0508 00:39:48.074619 2259 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:39:48.075241 env[1314]: time="2025-05-08T00:39:48.075180836Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." May 8 00:39:48.078857 kubelet[2259]: I0508 00:39:48.076253 2259 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:39:48.109524 kubelet[2259]: I0508 00:39:48.109443 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7d6d4a3-b175-4de9-a48c-adc602706dbd-kube-proxy\") pod \"kube-proxy-92d2z\" (UID: \"e7d6d4a3-b175-4de9-a48c-adc602706dbd\") " pod="kube-system/kube-proxy-92d2z" May 8 00:39:48.109722 kubelet[2259]: I0508 00:39:48.109520 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcwdq\" (UniqueName: \"kubernetes.io/projected/e7d6d4a3-b175-4de9-a48c-adc602706dbd-kube-api-access-wcwdq\") pod \"kube-proxy-92d2z\" (UID: \"e7d6d4a3-b175-4de9-a48c-adc602706dbd\") " pod="kube-system/kube-proxy-92d2z" May 8 00:39:48.109722 kubelet[2259]: I0508 00:39:48.109560 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7d6d4a3-b175-4de9-a48c-adc602706dbd-xtables-lock\") pod \"kube-proxy-92d2z\" (UID: \"e7d6d4a3-b175-4de9-a48c-adc602706dbd\") " pod="kube-system/kube-proxy-92d2z" May 8 00:39:48.109722 kubelet[2259]: I0508 00:39:48.109578 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7d6d4a3-b175-4de9-a48c-adc602706dbd-lib-modules\") pod \"kube-proxy-92d2z\" (UID: \"e7d6d4a3-b175-4de9-a48c-adc602706dbd\") " pod="kube-system/kube-proxy-92d2z" May 8 00:39:48.216291 kubelet[2259]: E0508 00:39:48.216166 2259 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:39:48.216291 kubelet[2259]: E0508 00:39:48.216223 2259 projected.go:200] Error preparing data for projected volume 
kube-api-access-wcwdq for pod kube-system/kube-proxy-92d2z: configmap "kube-root-ca.crt" not found May 8 00:39:48.216291 kubelet[2259]: E0508 00:39:48.216301 2259 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d6d4a3-b175-4de9-a48c-adc602706dbd-kube-api-access-wcwdq podName:e7d6d4a3-b175-4de9-a48c-adc602706dbd nodeName:}" failed. No retries permitted until 2025-05-08 00:39:48.716271921 +0000 UTC m=+13.402682663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wcwdq" (UniqueName: "kubernetes.io/projected/e7d6d4a3-b175-4de9-a48c-adc602706dbd-kube-api-access-wcwdq") pod "kube-proxy-92d2z" (UID: "e7d6d4a3-b175-4de9-a48c-adc602706dbd") : configmap "kube-root-ca.crt" not found May 8 00:39:48.813360 kubelet[2259]: E0508 00:39:48.813318 2259 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:39:48.813360 kubelet[2259]: E0508 00:39:48.813347 2259 projected.go:200] Error preparing data for projected volume kube-api-access-wcwdq for pod kube-system/kube-proxy-92d2z: configmap "kube-root-ca.crt" not found May 8 00:39:48.813602 kubelet[2259]: E0508 00:39:48.813393 2259 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d6d4a3-b175-4de9-a48c-adc602706dbd-kube-api-access-wcwdq podName:e7d6d4a3-b175-4de9-a48c-adc602706dbd nodeName:}" failed. No retries permitted until 2025-05-08 00:39:49.813376147 +0000 UTC m=+14.499786879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wcwdq" (UniqueName: "kubernetes.io/projected/e7d6d4a3-b175-4de9-a48c-adc602706dbd-kube-api-access-wcwdq") pod "kube-proxy-92d2z" (UID: "e7d6d4a3-b175-4de9-a48c-adc602706dbd") : configmap "kube-root-ca.crt" not found May 8 00:39:49.461189 kubelet[2259]: I0508 00:39:49.461110 2259 topology_manager.go:215] "Topology Admit Handler" podUID="3d750aa5-a103-433b-a1d1-a8eb17a4c5c3" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-wl7pz" May 8 00:39:49.617992 kubelet[2259]: I0508 00:39:49.617936 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d750aa5-a103-433b-a1d1-a8eb17a4c5c3-var-lib-calico\") pod \"tigera-operator-797db67f8-wl7pz\" (UID: \"3d750aa5-a103-433b-a1d1-a8eb17a4c5c3\") " pod="tigera-operator/tigera-operator-797db67f8-wl7pz" May 8 00:39:49.617992 kubelet[2259]: I0508 00:39:49.617977 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg62d\" (UniqueName: \"kubernetes.io/projected/3d750aa5-a103-433b-a1d1-a8eb17a4c5c3-kube-api-access-fg62d\") pod \"tigera-operator-797db67f8-wl7pz\" (UID: \"3d750aa5-a103-433b-a1d1-a8eb17a4c5c3\") " pod="tigera-operator/tigera-operator-797db67f8-wl7pz" May 8 00:39:49.854868 kubelet[2259]: E0508 00:39:49.854658 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:49.855347 env[1314]: time="2025-05-08T00:39:49.855294818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92d2z,Uid:e7d6d4a3-b175-4de9-a48c-adc602706dbd,Namespace:kube-system,Attempt:0,}" May 8 00:39:50.065244 env[1314]: time="2025-05-08T00:39:50.065151524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:50.065244 env[1314]: time="2025-05-08T00:39:50.065211389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:50.065244 env[1314]: time="2025-05-08T00:39:50.065225996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:50.065465 env[1314]: time="2025-05-08T00:39:50.065400920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-wl7pz,Uid:3d750aa5-a103-433b-a1d1-a8eb17a4c5c3,Namespace:tigera-operator,Attempt:0,}" May 8 00:39:50.066088 env[1314]: time="2025-05-08T00:39:50.066000214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/798022e4d2567ac542f31473ba2a1df248b71d20bf49c4580c3cb1934a34d3e6 pid=2354 runtime=io.containerd.runc.v2 May 8 00:39:50.101752 env[1314]: time="2025-05-08T00:39:50.100245162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:50.101752 env[1314]: time="2025-05-08T00:39:50.100359069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:50.101752 env[1314]: time="2025-05-08T00:39:50.100411288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:50.101752 env[1314]: time="2025-05-08T00:39:50.100637631Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62f3734bd105bc637fffea77d599a2fc32fe2033cdb2381ba46d96556ffac2ff pid=2388 runtime=io.containerd.runc.v2 May 8 00:39:50.115593 env[1314]: time="2025-05-08T00:39:50.114266207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92d2z,Uid:e7d6d4a3-b175-4de9-a48c-adc602706dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"798022e4d2567ac542f31473ba2a1df248b71d20bf49c4580c3cb1934a34d3e6\"" May 8 00:39:50.116794 kubelet[2259]: E0508 00:39:50.116766 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:50.119148 env[1314]: time="2025-05-08T00:39:50.119119078Z" level=info msg="CreateContainer within sandbox \"798022e4d2567ac542f31473ba2a1df248b71d20bf49c4580c3cb1934a34d3e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:39:50.151849 env[1314]: time="2025-05-08T00:39:50.151766256Z" level=info msg="CreateContainer within sandbox \"798022e4d2567ac542f31473ba2a1df248b71d20bf49c4580c3cb1934a34d3e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2f9a6131bfff59ef824192a718da567d00aa971615267b6b064da5de1b52bf1\"" May 8 00:39:50.153735 env[1314]: time="2025-05-08T00:39:50.152694798Z" level=info msg="StartContainer for \"c2f9a6131bfff59ef824192a718da567d00aa971615267b6b064da5de1b52bf1\"" May 8 00:39:50.158436 env[1314]: time="2025-05-08T00:39:50.158376821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-wl7pz,Uid:3d750aa5-a103-433b-a1d1-a8eb17a4c5c3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"62f3734bd105bc637fffea77d599a2fc32fe2033cdb2381ba46d96556ffac2ff\"" May 8 00:39:50.161557 
env[1314]: time="2025-05-08T00:39:50.161528505Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:39:50.209928 env[1314]: time="2025-05-08T00:39:50.209830767Z" level=info msg="StartContainer for \"c2f9a6131bfff59ef824192a718da567d00aa971615267b6b064da5de1b52bf1\" returns successfully" May 8 00:39:50.279000 audit[2490]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.292606 kernel: audit: type=1325 audit(1746664790.279:227): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.292779 kernel: audit: type=1300 audit(1746664790.279:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf507cae0 a2=0 a3=7ffcf507cacc items=0 ppid=2447 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.292807 kernel: audit: type=1327 audit(1746664790.279:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:39:50.279000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf507cae0 a2=0 a3=7ffcf507cacc items=0 ppid=2447 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:39:50.279000 audit[2491]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.279000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffc54be7d40 a2=0 a3=7ffc54be7d2c items=0 ppid=2447 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.300073 kernel: audit: type=1325 audit(1746664790.279:228): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.300150 kernel: audit: type=1300 audit(1746664790.279:228): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc54be7d40 a2=0 a3=7ffc54be7d2c items=0 ppid=2447 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.300181 kernel: audit: type=1327 audit(1746664790.279:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:39:50.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 8 00:39:50.282000 audit[2492]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.305203 kernel: audit: type=1325 audit(1746664790.282:229): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.305269 kernel: audit: type=1300 audit(1746664790.282:229): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc1017510 a2=0 a3=7ffcc10174fc items=0 ppid=2447 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.282000 audit[2492]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc1017510 a2=0 a3=7ffcc10174fc items=0 ppid=2447 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 8 00:39:50.313095 kernel: audit: type=1327 audit(1746664790.282:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 8 00:39:50.313154 kernel: audit: type=1325 audit(1746664790.282:230): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.282000 audit[2493]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.282000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd76af7120 a2=0 a3=7ffd76af710c items=0 ppid=2447 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 8 00:39:50.284000 audit[2495]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.284000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5c7070a0 a2=0 a3=7ffc5c70708c items=0 ppid=2447 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 8 00:39:50.284000 audit[2494]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.284000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7219d480 a2=0 a3=7ffd7219d46c items=0 ppid=2447 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 8 00:39:50.381000 audit[2496]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.381000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe5a2c41a0 a2=0 a3=7ffe5a2c418c items=0 ppid=2447 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 8 00:39:50.385000 audit[2498]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.385000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc4c3f2ad0 a2=0 a3=7ffc4c3f2abc items=0 ppid=2447 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 8 00:39:50.389000 audit[2501]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.389000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffced30e300 a2=0 a3=7ffced30e2ec items=0 ppid=2447 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 8 00:39:50.390000 audit[2502]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.390000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1f8e7e50 a2=0 a3=7ffe1f8e7e3c items=0 ppid=2447 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 8 00:39:50.392000 audit[2504]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule 
pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.392000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffed23892a0 a2=0 a3=7ffed238928c items=0 ppid=2447 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 8 00:39:50.393000 audit[2505]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.393000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd39c0c590 a2=0 a3=7ffd39c0c57c items=0 ppid=2447 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 8 00:39:50.396000 audit[2507]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.396000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb8348920 a2=0 a3=7ffcb834890c items=0 ppid=2447 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.396000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 8 00:39:50.399000 audit[2510]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.399000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd300bea30 a2=0 a3=7ffd300bea1c items=0 ppid=2447 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 8 00:39:50.400000 audit[2511]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.400000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb4906c70 a2=0 a3=7ffeb4906c5c items=0 ppid=2447 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 8 00:39:50.403000 audit[2513]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.403000 audit[2513]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffee03b05a0 a2=0 a3=7ffee03b058c items=0 ppid=2447 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 8 00:39:50.404000 audit[2514]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.404000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb8aab690 a2=0 a3=7ffcb8aab67c items=0 ppid=2447 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 8 00:39:50.406000 audit[2516]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.406000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd27db3100 a2=0 a3=7ffd27db30ec items=0 ppid=2447 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.406000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 8 00:39:50.409000 audit[2519]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.409000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4cdcc660 a2=0 a3=7ffe4cdcc64c items=0 ppid=2447 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 8 00:39:50.413000 audit[2522]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.413000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd7f89c00 a2=0 a3=7ffdd7f89bec items=0 ppid=2447 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 8 00:39:50.414000 audit[2523]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.414000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9a7624f0 a2=0 a3=7ffe9a7624dc items=0 ppid=2447 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 8 00:39:50.417000 audit[2525]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.417000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffae456c10 a2=0 a3=7fffae456bfc items=0 ppid=2447 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:39:50.419000 audit[2528]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.419000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdab75b3c0 a2=0 a3=7ffdab75b3ac items=0 ppid=2447 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.419000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:39:50.421000 audit[2529]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.421000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2ba29620 a2=0 a3=7fff2ba2960c items=0 ppid=2447 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 8 00:39:50.423000 audit[2531]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 8 00:39:50.423000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffee2365fa0 a2=0 a3=7ffee2365f8c items=0 ppid=2447 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 8 00:39:50.447000 audit[2537]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:50.447000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffff66784e0 a2=0 a3=7ffff66784cc items=0 
ppid=2447 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:50.449530 kubelet[2259]: E0508 00:39:50.449498 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:50.467000 audit[2537]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:50.467000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffff66784e0 a2=0 a3=7ffff66784cc items=0 ppid=2447 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:50.469000 audit[2542]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.469000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeda64fa90 a2=0 a3=7ffeda64fa7c items=0 ppid=2447 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 8 
00:39:50.471000 audit[2544]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.471000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd888e4ed0 a2=0 a3=7ffd888e4ebc items=0 ppid=2447 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 8 00:39:50.475000 audit[2547]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.475000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe405f58e0 a2=0 a3=7ffe405f58cc items=0 ppid=2447 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 8 00:39:50.476000 audit[2548]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.476000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe98a8a850 a2=0 a3=7ffe98a8a83c items=0 ppid=2447 pid=2548 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 8 00:39:50.479000 audit[2550]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.479000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfae61ef0 a2=0 a3=7ffdfae61edc items=0 ppid=2447 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 8 00:39:50.481464 kubelet[2259]: I0508 00:39:50.481174 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-92d2z" podStartSLOduration=2.481152854 podStartE2EDuration="2.481152854s" podCreationTimestamp="2025-05-08 00:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:50.479643354 +0000 UTC m=+15.166054116" watchObservedRunningTime="2025-05-08 00:39:50.481152854 +0000 UTC m=+15.167563587" May 8 00:39:50.480000 audit[2551]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.480000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff36122bf0 a2=0 a3=7fff36122bdc items=0 
ppid=2447 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 8 00:39:50.482000 audit[2553]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.482000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe904b3860 a2=0 a3=7ffe904b384c items=0 ppid=2447 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 8 00:39:50.486000 audit[2556]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.486000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff908687e0 a2=0 a3=7fff908687cc items=0 ppid=2447 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.486000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 8 00:39:50.488000 audit[2557]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.488000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff72351cc0 a2=0 a3=7fff72351cac items=0 ppid=2447 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 8 00:39:50.490000 audit[2559]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.490000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd13748360 a2=0 a3=7ffd1374834c items=0 ppid=2447 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 8 00:39:50.492000 audit[2560]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.492000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffe879fa3d0 a2=0 a3=7ffe879fa3bc items=0 ppid=2447 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 8 00:39:50.494000 audit[2562]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.494000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb8467760 a2=0 a3=7ffeb846774c items=0 ppid=2447 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 8 00:39:50.499000 audit[2565]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.499000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf997da40 a2=0 a3=7ffcf997da2c items=0 ppid=2447 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.499000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 8 00:39:50.502000 audit[2568]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.502000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe75d14870 a2=0 a3=7ffe75d1485c items=0 ppid=2447 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 8 00:39:50.504000 audit[2569]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.504000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd72f682a0 a2=0 a3=7ffd72f6828c items=0 ppid=2447 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 8 00:39:50.506000 audit[2571]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.506000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=600 a0=3 a1=7ffc1361b3a0 a2=0 a3=7ffc1361b38c items=0 ppid=2447 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:39:50.509000 audit[2574]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.509000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffb60f0a60 a2=0 a3=7fffb60f0a4c items=0 ppid=2447 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 8 00:39:50.510000 audit[2575]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.510000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9232a3f0 a2=0 a3=7ffe9232a3dc items=0 ppid=2447 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 8 
00:39:50.513000 audit[2577]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.513000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcae0dcda0 a2=0 a3=7ffcae0dcd8c items=0 ppid=2447 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 8 00:39:50.515000 audit[2578]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.515000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffced0523a0 a2=0 a3=7ffced05238c items=0 ppid=2447 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 8 00:39:50.517000 audit[2580]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.517000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd583d20b0 a2=0 a3=7ffd583d209c items=0 ppid=2447 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 
00:39:50.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 8 00:39:50.520000 audit[2583]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 8 00:39:50.520000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffee526fa90 a2=0 a3=7ffee526fa7c items=0 ppid=2447 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 8 00:39:50.523000 audit[2585]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 8 00:39:50.523000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdef398d10 a2=0 a3=7ffdef398cfc items=0 ppid=2447 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.523000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:50.524000 audit[2585]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 8 00:39:50.524000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdef398d10 a2=0 a3=7ffdef398cfc items=0 ppid=2447 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:50.524000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:51.763659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765564059.mount: Deactivated successfully. May 8 00:39:52.862973 env[1314]: time="2025-05-08T00:39:52.862890084Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:52.864948 env[1314]: time="2025-05-08T00:39:52.864894655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:52.867001 env[1314]: time="2025-05-08T00:39:52.866923823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:52.868832 env[1314]: time="2025-05-08T00:39:52.868764101Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:39:52.869465 env[1314]: time="2025-05-08T00:39:52.869418788Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:39:52.872500 env[1314]: time="2025-05-08T00:39:52.872458973Z" level=info msg="CreateContainer within sandbox \"62f3734bd105bc637fffea77d599a2fc32fe2033cdb2381ba46d96556ffac2ff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:39:52.887804 env[1314]: time="2025-05-08T00:39:52.887729969Z" level=info 
msg="CreateContainer within sandbox \"62f3734bd105bc637fffea77d599a2fc32fe2033cdb2381ba46d96556ffac2ff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"586b56b6278351aa193cc63dfe543996c5f88f762248cc1d32fdc77cc26d82f6\"" May 8 00:39:52.888602 env[1314]: time="2025-05-08T00:39:52.888529373Z" level=info msg="StartContainer for \"586b56b6278351aa193cc63dfe543996c5f88f762248cc1d32fdc77cc26d82f6\"" May 8 00:39:53.288826 env[1314]: time="2025-05-08T00:39:53.288717603Z" level=info msg="StartContainer for \"586b56b6278351aa193cc63dfe543996c5f88f762248cc1d32fdc77cc26d82f6\" returns successfully" May 8 00:39:56.044000 audit[2626]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.046570 kernel: kauditd_printk_skb: 143 callbacks suppressed May 8 00:39:56.046624 kernel: audit: type=1325 audit(1746664796.044:278): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.044000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc7f877270 a2=0 a3=7ffc7f87725c items=0 ppid=2447 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.055894 kernel: audit: type=1300 audit(1746664796.044:278): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc7f877270 a2=0 a3=7ffc7f87725c items=0 ppid=2447 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.055994 kernel: audit: type=1327 audit(1746664796.044:278): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.060000 audit[2626]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.060000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7f877270 a2=0 a3=0 items=0 ppid=2447 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.069013 kernel: audit: type=1325 audit(1746664796.060:279): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.069088 kernel: audit: type=1300 audit(1746664796.060:279): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7f877270 a2=0 a3=0 items=0 ppid=2447 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.069140 kernel: audit: type=1327 audit(1746664796.060:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.077000 audit[2628]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.077000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd704acf80 a2=0 
a3=7ffd704acf6c items=0 ppid=2447 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.087085 kernel: audit: type=1325 audit(1746664796.077:280): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.087195 kernel: audit: type=1300 audit(1746664796.077:280): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd704acf80 a2=0 a3=7ffd704acf6c items=0 ppid=2447 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.087219 kernel: audit: type=1327 audit(1746664796.077:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.093000 audit[2628]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.093000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd704acf80 a2=0 a3=0 items=0 ppid=2447 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:56.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:56.097887 kernel: audit: type=1325 audit(1746664796.093:281): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2628 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:56.655573 kubelet[2259]: I0508 00:39:56.655486 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-wl7pz" podStartSLOduration=4.944581969 podStartE2EDuration="7.655461143s" podCreationTimestamp="2025-05-08 00:39:49 +0000 UTC" firstStartedPulling="2025-05-08 00:39:50.159600917 +0000 UTC m=+14.846011659" lastFinishedPulling="2025-05-08 00:39:52.870480091 +0000 UTC m=+17.556890833" observedRunningTime="2025-05-08 00:39:53.589245436 +0000 UTC m=+18.275656178" watchObservedRunningTime="2025-05-08 00:39:56.655461143 +0000 UTC m=+21.341871915" May 8 00:39:56.656232 kubelet[2259]: I0508 00:39:56.655672 2259 topology_manager.go:215] "Topology Admit Handler" podUID="c500bc7b-6773-4937-b7ff-3290bb223cb4" podNamespace="calico-system" podName="calico-typha-55c6f854fb-k96wg" May 8 00:39:56.760969 kubelet[2259]: I0508 00:39:56.760890 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c500bc7b-6773-4937-b7ff-3290bb223cb4-typha-certs\") pod \"calico-typha-55c6f854fb-k96wg\" (UID: \"c500bc7b-6773-4937-b7ff-3290bb223cb4\") " pod="calico-system/calico-typha-55c6f854fb-k96wg" May 8 00:39:56.760969 kubelet[2259]: I0508 00:39:56.760954 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c500bc7b-6773-4937-b7ff-3290bb223cb4-tigera-ca-bundle\") pod \"calico-typha-55c6f854fb-k96wg\" (UID: \"c500bc7b-6773-4937-b7ff-3290bb223cb4\") " pod="calico-system/calico-typha-55c6f854fb-k96wg" May 8 00:39:56.761192 kubelet[2259]: I0508 00:39:56.760987 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtzft\" (UniqueName: 
\"kubernetes.io/projected/c500bc7b-6773-4937-b7ff-3290bb223cb4-kube-api-access-dtzft\") pod \"calico-typha-55c6f854fb-k96wg\" (UID: \"c500bc7b-6773-4937-b7ff-3290bb223cb4\") " pod="calico-system/calico-typha-55c6f854fb-k96wg" May 8 00:39:57.046100 kubelet[2259]: I0508 00:39:57.046048 2259 topology_manager.go:215] "Topology Admit Handler" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" podNamespace="calico-system" podName="calico-node-mhdts" May 8 00:39:57.107000 audit[2632]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:57.107000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd9d2cffa0 a2=0 a3=7ffd9d2cff8c items=0 ppid=2447 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:57.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:57.119000 audit[2632]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:39:57.119000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9d2cffa0 a2=0 a3=0 items=0 ppid=2447 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:39:57.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:39:57.163079 kubelet[2259]: I0508 00:39:57.163025 2259 topology_manager.go:215] "Topology Admit Handler" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" podNamespace="calico-system" 
podName="csi-node-driver-rrhhb" May 8 00:39:57.163358 kubelet[2259]: E0508 00:39:57.163313 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:39:57.163576 kubelet[2259]: I0508 00:39:57.163554 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-lib-modules\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163576 kubelet[2259]: I0508 00:39:57.163573 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-run-calico\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163677 kubelet[2259]: I0508 00:39:57.163590 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-lib-calico\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163677 kubelet[2259]: I0508 00:39:57.163605 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-bin-dir\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163677 kubelet[2259]: I0508 
00:39:57.163619 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-policysync\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163677 kubelet[2259]: I0508 00:39:57.163651 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-log-dir\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163677 kubelet[2259]: I0508 00:39:57.163674 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8af932a1-2652-43b6-80a3-ba0182b9cf24-tigera-ca-bundle\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163869 kubelet[2259]: I0508 00:39:57.163694 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8af932a1-2652-43b6-80a3-ba0182b9cf24-node-certs\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163869 kubelet[2259]: I0508 00:39:57.163719 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-flexvol-driver-host\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163869 kubelet[2259]: I0508 00:39:57.163735 2259 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-xtables-lock\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163869 kubelet[2259]: I0508 00:39:57.163750 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxst\" (UniqueName: \"kubernetes.io/projected/8af932a1-2652-43b6-80a3-ba0182b9cf24-kube-api-access-7fxst\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.163869 kubelet[2259]: I0508 00:39:57.163766 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-net-dir\") pod \"calico-node-mhdts\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") " pod="calico-system/calico-node-mhdts" May 8 00:39:57.260274 kubelet[2259]: E0508 00:39:57.260213 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:57.261000 env[1314]: time="2025-05-08T00:39:57.260946882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c6f854fb-k96wg,Uid:c500bc7b-6773-4937-b7ff-3290bb223cb4,Namespace:calico-system,Attempt:0,}" May 8 00:39:57.264981 kubelet[2259]: I0508 00:39:57.264502 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1c58f86-7966-473c-98f3-e00538745ae1-kubelet-dir\") pod \"csi-node-driver-rrhhb\" (UID: \"a1c58f86-7966-473c-98f3-e00538745ae1\") " pod="calico-system/csi-node-driver-rrhhb" May 8 00:39:57.264981 kubelet[2259]: I0508 00:39:57.264550 2259 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1c58f86-7966-473c-98f3-e00538745ae1-socket-dir\") pod \"csi-node-driver-rrhhb\" (UID: \"a1c58f86-7966-473c-98f3-e00538745ae1\") " pod="calico-system/csi-node-driver-rrhhb" May 8 00:39:57.264981 kubelet[2259]: I0508 00:39:57.264576 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a1c58f86-7966-473c-98f3-e00538745ae1-varrun\") pod \"csi-node-driver-rrhhb\" (UID: \"a1c58f86-7966-473c-98f3-e00538745ae1\") " pod="calico-system/csi-node-driver-rrhhb" May 8 00:39:57.264981 kubelet[2259]: I0508 00:39:57.264595 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkz4g\" (UniqueName: \"kubernetes.io/projected/a1c58f86-7966-473c-98f3-e00538745ae1-kube-api-access-jkz4g\") pod \"csi-node-driver-rrhhb\" (UID: \"a1c58f86-7966-473c-98f3-e00538745ae1\") " pod="calico-system/csi-node-driver-rrhhb" May 8 00:39:57.264981 kubelet[2259]: I0508 00:39:57.264690 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1c58f86-7966-473c-98f3-e00538745ae1-registration-dir\") pod \"csi-node-driver-rrhhb\" (UID: \"a1c58f86-7966-473c-98f3-e00538745ae1\") " pod="calico-system/csi-node-driver-rrhhb" May 8 00:39:57.266456 kubelet[2259]: E0508 00:39:57.266429 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.266571 kubelet[2259]: W0508 00:39:57.266548 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.266799 kubelet[2259]: E0508 
00:39:57.266763 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.267044 kubelet[2259]: E0508 00:39:57.267026 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.267044 kubelet[2259]: W0508 00:39:57.267041 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.267118 kubelet[2259]: E0508 00:39:57.267054 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.267367 kubelet[2259]: E0508 00:39:57.267349 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.267459 kubelet[2259]: W0508 00:39:57.267439 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.267547 kubelet[2259]: E0508 00:39:57.267530 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.267821 kubelet[2259]: E0508 00:39:57.267810 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.267925 kubelet[2259]: W0508 00:39:57.267909 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.268006 kubelet[2259]: E0508 00:39:57.267988 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.272310 kubelet[2259]: E0508 00:39:57.272286 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.272470 kubelet[2259]: W0508 00:39:57.272450 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.272562 kubelet[2259]: E0508 00:39:57.272543 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.285113 kubelet[2259]: E0508 00:39:57.278825 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.285113 kubelet[2259]: W0508 00:39:57.278874 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.285113 kubelet[2259]: E0508 00:39:57.278901 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.291930 env[1314]: time="2025-05-08T00:39:57.291435738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:57.291930 env[1314]: time="2025-05-08T00:39:57.291479561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:57.291930 env[1314]: time="2025-05-08T00:39:57.291492996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:57.295485 env[1314]: time="2025-05-08T00:39:57.292791085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e29a1e9f5dfede2a74177ffb101d742fdd6c95ecc607fd2be40af166f522d5f8 pid=2649 runtime=io.containerd.runc.v2 May 8 00:39:57.348235 env[1314]: time="2025-05-08T00:39:57.348055121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c6f854fb-k96wg,Uid:c500bc7b-6773-4937-b7ff-3290bb223cb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e29a1e9f5dfede2a74177ffb101d742fdd6c95ecc607fd2be40af166f522d5f8\"" May 8 00:39:57.350746 kubelet[2259]: E0508 00:39:57.350229 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:57.351411 env[1314]: time="2025-05-08T00:39:57.351366057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mhdts,Uid:8af932a1-2652-43b6-80a3-ba0182b9cf24,Namespace:calico-system,Attempt:0,}" May 8 00:39:57.351528 kubelet[2259]: E0508 00:39:57.351499 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:57.352351 env[1314]: time="2025-05-08T00:39:57.352319110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:39:57.366485 kubelet[2259]: E0508 00:39:57.366447 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.366485 kubelet[2259]: W0508 00:39:57.366477 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.366711 kubelet[2259]: E0508 
00:39:57.366501 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.367831 kubelet[2259]: E0508 00:39:57.367798 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.367966 kubelet[2259]: W0508 00:39:57.367830 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.367966 kubelet[2259]: E0508 00:39:57.367893 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.368320 kubelet[2259]: E0508 00:39:57.368307 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.368320 kubelet[2259]: W0508 00:39:57.368318 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.368414 kubelet[2259]: E0508 00:39:57.368372 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.369229 kubelet[2259]: E0508 00:39:57.368660 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.369229 kubelet[2259]: W0508 00:39:57.368673 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.369229 kubelet[2259]: E0508 00:39:57.368750 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.369229 kubelet[2259]: E0508 00:39:57.368893 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.369229 kubelet[2259]: W0508 00:39:57.368901 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.369229 kubelet[2259]: E0508 00:39:57.368914 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.369229 kubelet[2259]: E0508 00:39:57.369080 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.369229 kubelet[2259]: W0508 00:39:57.369088 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.369229 kubelet[2259]: E0508 00:39:57.369098 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.369496 kubelet[2259]: E0508 00:39:57.369342 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.369496 kubelet[2259]: W0508 00:39:57.369350 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.369496 kubelet[2259]: E0508 00:39:57.369360 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.369703 kubelet[2259]: E0508 00:39:57.369682 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.369703 kubelet[2259]: W0508 00:39:57.369699 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.369822 kubelet[2259]: E0508 00:39:57.369719 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.369953 kubelet[2259]: E0508 00:39:57.369933 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.369953 kubelet[2259]: W0508 00:39:57.369946 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.370078 kubelet[2259]: E0508 00:39:57.369994 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.370165 kubelet[2259]: E0508 00:39:57.370143 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.370165 kubelet[2259]: W0508 00:39:57.370157 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.370295 kubelet[2259]: E0508 00:39:57.370209 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.370480 kubelet[2259]: E0508 00:39:57.370457 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.370480 kubelet[2259]: W0508 00:39:57.370474 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.370589 kubelet[2259]: E0508 00:39:57.370516 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.370799 kubelet[2259]: E0508 00:39:57.370773 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.370799 kubelet[2259]: W0508 00:39:57.370793 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.370916 kubelet[2259]: E0508 00:39:57.370832 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371049 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.371780 kubelet[2259]: W0508 00:39:57.371064 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371083 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371310 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.371780 kubelet[2259]: W0508 00:39:57.371318 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371353 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371501 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.371780 kubelet[2259]: W0508 00:39:57.371508 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371537 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.371780 kubelet[2259]: E0508 00:39:57.371687 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.372067 kubelet[2259]: W0508 00:39:57.371696 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.372067 kubelet[2259]: E0508 00:39:57.371782 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.372067 kubelet[2259]: E0508 00:39:57.371885 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.372067 kubelet[2259]: W0508 00:39:57.371894 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.372067 kubelet[2259]: E0508 00:39:57.371929 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.372290 kubelet[2259]: E0508 00:39:57.372073 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.372290 kubelet[2259]: W0508 00:39:57.372083 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.372290 kubelet[2259]: E0508 00:39:57.372096 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.372290 kubelet[2259]: E0508 00:39:57.372280 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.372290 kubelet[2259]: W0508 00:39:57.372288 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.372413 kubelet[2259]: E0508 00:39:57.372296 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.372728 kubelet[2259]: E0508 00:39:57.372707 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.372728 kubelet[2259]: W0508 00:39:57.372723 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.372821 kubelet[2259]: E0508 00:39:57.372740 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.372976 kubelet[2259]: E0508 00:39:57.372940 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.372976 kubelet[2259]: W0508 00:39:57.372955 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.373049 kubelet[2259]: E0508 00:39:57.373037 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.373341 kubelet[2259]: E0508 00:39:57.373119 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.373341 kubelet[2259]: W0508 00:39:57.373137 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.373341 kubelet[2259]: E0508 00:39:57.373147 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.373795 kubelet[2259]: E0508 00:39:57.373738 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.373795 kubelet[2259]: W0508 00:39:57.373791 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.374006 kubelet[2259]: E0508 00:39:57.373902 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.374264 kubelet[2259]: E0508 00:39:57.374237 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.374264 kubelet[2259]: W0508 00:39:57.374252 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.374361 kubelet[2259]: E0508 00:39:57.374271 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.374498 kubelet[2259]: E0508 00:39:57.374479 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.374498 kubelet[2259]: W0508 00:39:57.374491 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.374498 kubelet[2259]: E0508 00:39:57.374499 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:39:57.382249 kubelet[2259]: E0508 00:39:57.382184 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:39:57.382249 kubelet[2259]: W0508 00:39:57.382220 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:39:57.382510 kubelet[2259]: E0508 00:39:57.382277 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:39:57.394374 env[1314]: time="2025-05-08T00:39:57.393592552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:57.394374 env[1314]: time="2025-05-08T00:39:57.393669548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:57.394374 env[1314]: time="2025-05-08T00:39:57.393743028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:57.394374 env[1314]: time="2025-05-08T00:39:57.393992974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb pid=2716 runtime=io.containerd.runc.v2 May 8 00:39:57.438358 env[1314]: time="2025-05-08T00:39:57.438239770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mhdts,Uid:8af932a1-2652-43b6-80a3-ba0182b9cf24,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\"" May 8 00:39:57.438968 kubelet[2259]: E0508 00:39:57.438941 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:59.410393 kubelet[2259]: E0508 00:39:59.410330 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:00.841857 env[1314]: time="2025-05-08T00:40:00.841769103Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:00.843818 env[1314]: time="2025-05-08T00:40:00.843782457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:00.845361 env[1314]: time="2025-05-08T00:40:00.845319236Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:00.846826 env[1314]: time="2025-05-08T00:40:00.846779459Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:00.847234 env[1314]: time="2025-05-08T00:40:00.847193015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:40:00.848415 env[1314]: time="2025-05-08T00:40:00.848378707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:40:00.855960 env[1314]: time="2025-05-08T00:40:00.855897018Z" level=info msg="CreateContainer within sandbox \"e29a1e9f5dfede2a74177ffb101d742fdd6c95ecc607fd2be40af166f522d5f8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:40:00.873978 env[1314]: time="2025-05-08T00:40:00.873915844Z" level=info msg="CreateContainer within sandbox \"e29a1e9f5dfede2a74177ffb101d742fdd6c95ecc607fd2be40af166f522d5f8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1da58b7f7f3945987206a563ae69974528e9e180daed21ba792d1cc8feab0fe8\"" May 8 00:40:00.874516 env[1314]: time="2025-05-08T00:40:00.874486127Z" level=info msg="StartContainer for \"1da58b7f7f3945987206a563ae69974528e9e180daed21ba792d1cc8feab0fe8\"" May 8 00:40:00.938558 env[1314]: time="2025-05-08T00:40:00.938501301Z" level=info msg="StartContainer for \"1da58b7f7f3945987206a563ae69974528e9e180daed21ba792d1cc8feab0fe8\" returns successfully" May 8 00:40:01.410068 kubelet[2259]: E0508 00:40:01.409997 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:01.473576 kubelet[2259]: E0508 00:40:01.473536 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.499573 kubelet[2259]: E0508 00:40:01.499512 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.499573 kubelet[2259]: W0508 00:40:01.499541 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.499573 kubelet[2259]: E0508 00:40:01.499567 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:01.499910 kubelet[2259]: E0508 00:40:01.499817 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.499910 kubelet[2259]: W0508 00:40:01.499878 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.499984 kubelet[2259]: E0508 00:40:01.499919 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:01.500257 kubelet[2259]: E0508 00:40:01.500227 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.500257 kubelet[2259]: W0508 00:40:01.500243 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.500257 kubelet[2259]: E0508 00:40:01.500254 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:01.500523 kubelet[2259]: E0508 00:40:01.500472 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.500523 kubelet[2259]: W0508 00:40:01.500489 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.500523 kubelet[2259]: E0508 00:40:01.500499 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:01.500818 kubelet[2259]: E0508 00:40:01.500724 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.500818 kubelet[2259]: W0508 00:40:01.500734 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.500818 kubelet[2259]: E0508 00:40:01.500745 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:01.500995 kubelet[2259]: E0508 00:40:01.500980 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.501042 kubelet[2259]: W0508 00:40:01.500997 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.501042 kubelet[2259]: E0508 00:40:01.501010 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:01.501281 kubelet[2259]: E0508 00:40:01.501252 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.501281 kubelet[2259]: W0508 00:40:01.501272 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.501395 kubelet[2259]: E0508 00:40:01.501285 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:01.501539 kubelet[2259]: E0508 00:40:01.501517 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.501539 kubelet[2259]: W0508 00:40:01.501536 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.501656 kubelet[2259]: E0508 00:40:01.501559 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:01.501860 kubelet[2259]: E0508 00:40:01.501814 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.501860 kubelet[2259]: W0508 00:40:01.501829 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.501860 kubelet[2259]: E0508 00:40:01.501854 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:01.502032 kubelet[2259]: E0508 00:40:01.502017 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.502032 kubelet[2259]: W0508 00:40:01.502028 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.502111 kubelet[2259]: E0508 00:40:01.502035 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:01.508479 kubelet[2259]: E0508 00:40:01.508467 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.508479 kubelet[2259]: W0508 00:40:01.508477 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.508552 kubelet[2259]: E0508 00:40:01.508487 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:02.474342 kubelet[2259]: I0508 00:40:02.474302 2259 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:02.479991 kubelet[2259]: E0508 00:40:02.479953 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:02.514472 kubelet[2259]: E0508 00:40:02.514428 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:02.514472 kubelet[2259]: W0508 00:40:02.514460 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:02.514716 kubelet[2259]: E0508 00:40:02.514494 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 8 00:40:02.619048 kubelet[2259]: E0508 00:40:02.619028 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:02.619088 kubelet[2259]: W0508 00:40:02.619046 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:02.619088 kubelet[2259]: E0508 00:40:02.619065 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:02.619256 kubelet[2259]: E0508 00:40:02.619238 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:02.619256 kubelet[2259]: W0508 00:40:02.619253 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:02.619335 kubelet[2259]: E0508 00:40:02.619264 2259 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:03.202566 env[1314]: time="2025-05-08T00:40:03.202496991Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:03.204860 env[1314]: time="2025-05-08T00:40:03.204773070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:03.206487 env[1314]: time="2025-05-08T00:40:03.206444072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:03.208160 env[1314]: time="2025-05-08T00:40:03.208122587Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:03.208603 env[1314]: time="2025-05-08T00:40:03.208569265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:40:03.211527 env[1314]: time="2025-05-08T00:40:03.211490439Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:40:03.231190 env[1314]: time="2025-05-08T00:40:03.231138365Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd\"" May 8 
00:40:03.231706 env[1314]: time="2025-05-08T00:40:03.231680454Z" level=info msg="StartContainer for \"617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd\"" May 8 00:40:03.275998 env[1314]: time="2025-05-08T00:40:03.275925290Z" level=info msg="StartContainer for \"617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd\" returns successfully" May 8 00:40:03.346781 env[1314]: time="2025-05-08T00:40:03.346717096Z" level=info msg="shim disconnected" id=617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd May 8 00:40:03.346781 env[1314]: time="2025-05-08T00:40:03.346774234Z" level=warning msg="cleaning up after shim disconnected" id=617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd namespace=k8s.io May 8 00:40:03.346781 env[1314]: time="2025-05-08T00:40:03.346783221Z" level=info msg="cleaning up dead shim" May 8 00:40:03.353266 env[1314]: time="2025-05-08T00:40:03.353212983Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2904 runtime=io.containerd.runc.v2\n" May 8 00:40:03.410594 kubelet[2259]: E0508 00:40:03.410516 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:03.479231 kubelet[2259]: E0508 00:40:03.478290 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:03.479645 env[1314]: time="2025-05-08T00:40:03.479192181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:40:03.495304 kubelet[2259]: I0508 00:40:03.494956 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-55c6f854fb-k96wg" podStartSLOduration=3.9988792159999997 podStartE2EDuration="7.49493128s" podCreationTimestamp="2025-05-08 00:39:56 +0000 UTC" firstStartedPulling="2025-05-08 00:39:57.352079243 +0000 UTC m=+22.038489985" lastFinishedPulling="2025-05-08 00:40:00.848131307 +0000 UTC m=+25.534542049" observedRunningTime="2025-05-08 00:40:01.484180295 +0000 UTC m=+26.170591037" watchObservedRunningTime="2025-05-08 00:40:03.49493128 +0000 UTC m=+28.181342042" May 8 00:40:04.225119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd-rootfs.mount: Deactivated successfully. May 8 00:40:05.410748 kubelet[2259]: E0508 00:40:05.410674 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:06.565882 kubelet[2259]: I0508 00:40:06.565827 2259 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:06.566748 kubelet[2259]: E0508 00:40:06.566577 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:06.613582 kernel: kauditd_printk_skb: 8 callbacks suppressed May 8 00:40:06.613753 kernel: audit: type=1325 audit(1746664806.607:284): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:40:06.613788 kernel: audit: type=1300 audit(1746664806.607:284): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdf8fb01e0 a2=0 a3=7ffdf8fb01cc items=0 ppid=2447 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:06.607000 audit[2927]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:40:06.607000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdf8fb01e0 a2=0 a3=7ffdf8fb01cc items=0 ppid=2447 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:06.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:40:06.626048 kernel: audit: type=1327 audit(1746664806.607:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:40:06.626190 kernel: audit: type=1325 audit(1746664806.622:285): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:40:06.622000 audit[2927]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:40:06.629852 kernel: audit: type=1300 audit(1746664806.622:285): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdf8fb01e0 a2=0 a3=7ffdf8fb01cc items=0 ppid=2447 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:06.622000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdf8fb01e0 a2=0 a3=7ffdf8fb01cc items=0 ppid=2447 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:06.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:40:06.639946 kernel: audit: type=1327 audit(1746664806.622:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:40:07.410212 kubelet[2259]: E0508 00:40:07.410144 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:07.491145 kubelet[2259]: E0508 00:40:07.491102 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:09.410759 kubelet[2259]: E0508 00:40:09.410690 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:10.710207 env[1314]: time="2025-05-08T00:40:10.710133848Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:10.714351 env[1314]: time="2025-05-08T00:40:10.714235209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:10.717097 env[1314]: 
time="2025-05-08T00:40:10.717013584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:10.719251 env[1314]: time="2025-05-08T00:40:10.719144642Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:10.720050 env[1314]: time="2025-05-08T00:40:10.719961509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:40:10.722984 env[1314]: time="2025-05-08T00:40:10.722921067Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:40:10.738661 env[1314]: time="2025-05-08T00:40:10.738548048Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321\"" May 8 00:40:10.739417 env[1314]: time="2025-05-08T00:40:10.739365196Z" level=info msg="StartContainer for \"89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321\"" May 8 00:40:10.798546 env[1314]: time="2025-05-08T00:40:10.798485521Z" level=info msg="StartContainer for \"89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321\" returns successfully" May 8 00:40:11.410869 kubelet[2259]: E0508 00:40:11.410782 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:11.499715 kubelet[2259]: E0508 00:40:11.499664 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:12.486360 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:59454.service. May 8 00:40:12.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.16:22-10.0.0.1:59454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:12.492877 kernel: audit: type=1130 audit(1746664812.485:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.16:22-10.0.0.1:59454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:12.502071 kubelet[2259]: E0508 00:40:12.502035 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:12.528000 audit[2962]: USER_ACCT pid=2962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.531111 sshd[2962]: Accepted publickey for core from 10.0.0.1 port 59454 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:12.534369 sshd[2962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:12.532000 audit[2962]: CRED_ACQ pid=2962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' May 8 00:40:12.539656 kernel: audit: type=1101 audit(1746664812.528:287): pid=2962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.539791 kernel: audit: type=1103 audit(1746664812.532:288): pid=2962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.539825 kernel: audit: type=1006 audit(1746664812.532:289): pid=2962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 8 00:40:12.540313 systemd-logind[1294]: New session 10 of user core. May 8 00:40:12.541330 systemd[1]: Started session-10.scope. May 8 00:40:12.532000 audit[2962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdee88c970 a2=3 a3=0 items=0 ppid=1 pid=2962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:12.548961 kernel: audit: type=1300 audit(1746664812.532:289): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdee88c970 a2=3 a3=0 items=0 ppid=1 pid=2962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:12.532000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:12.550756 kernel: audit: type=1327 audit(1746664812.532:289): proctitle=737368643A20636F7265205B707269765D May 8 00:40:12.546000 audit[2962]: USER_START pid=2962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.555748 kernel: audit: type=1105 audit(1746664812.546:290): pid=2962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.555824 kernel: audit: type=1103 audit(1746664812.547:291): pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.547000 audit[2965]: CRED_ACQ pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.894000 audit[2962]: USER_END pid=2962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.895000 audit[2962]: CRED_DISP pid=2962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.894464 sshd[2962]: pam_unix(sshd:session): session closed for user core May 8 00:40:12.897626 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:59454.service: Deactivated successfully. May 8 00:40:12.898739 systemd-logind[1294]: Session 10 logged out. Waiting for processes to exit. 
May 8 00:40:12.898879 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:40:12.899951 systemd-logind[1294]: Removed session 10. May 8 00:40:12.905760 kernel: audit: type=1106 audit(1746664812.894:292): pid=2962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.905868 kernel: audit: type=1104 audit(1746664812.895:293): pid=2962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:12.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.16:22-10.0.0.1:59454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:13.410763 kubelet[2259]: E0508 00:40:13.410705 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:14.045764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321-rootfs.mount: Deactivated successfully. 
May 8 00:40:14.120674 kubelet[2259]: I0508 00:40:14.120637 2259 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:40:14.236028 env[1314]: time="2025-05-08T00:40:14.235952386Z" level=info msg="shim disconnected" id=89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321 May 8 00:40:14.236028 env[1314]: time="2025-05-08T00:40:14.236019493Z" level=warning msg="cleaning up after shim disconnected" id=89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321 namespace=k8s.io May 8 00:40:14.236028 env[1314]: time="2025-05-08T00:40:14.236039060Z" level=info msg="cleaning up dead shim" May 8 00:40:14.242730 env[1314]: time="2025-05-08T00:40:14.242674947Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2992 runtime=io.containerd.runc.v2\n" May 8 00:40:14.339605 kubelet[2259]: I0508 00:40:14.339444 2259 topology_manager.go:215] "Topology Admit Handler" podUID="e9d7454a-993f-4132-8ced-f8cdba985c53" podNamespace="kube-system" podName="coredns-7db6d8ff4d-89lsx" May 8 00:40:14.343236 kubelet[2259]: I0508 00:40:14.343172 2259 topology_manager.go:215] "Topology Admit Handler" podUID="35415a0b-9f3d-4f12-b555-b4c08d155deb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xrfkq" May 8 00:40:14.343401 kubelet[2259]: I0508 00:40:14.343370 2259 topology_manager.go:215] "Topology Admit Handler" podUID="5444d20d-8a4f-4e35-a777-fef99f439552" podNamespace="calico-system" podName="calico-kube-controllers-575f4bf5b7-jhlnt" May 8 00:40:14.343481 kubelet[2259]: I0508 00:40:14.343450 2259 topology_manager.go:215] "Topology Admit Handler" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" podNamespace="calico-apiserver" podName="calico-apiserver-655fb5665b-b5526" May 8 00:40:14.344598 kubelet[2259]: I0508 00:40:14.344559 2259 topology_manager.go:215] "Topology Admit Handler" podUID="3dd70705-8c14-4d08-9f87-66c93e2ace47" podNamespace="calico-apiserver" 
podName="calico-apiserver-655fb5665b-8tf24" May 8 00:40:14.501332 kubelet[2259]: I0508 00:40:14.501271 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7hxg\" (UniqueName: \"kubernetes.io/projected/3dd70705-8c14-4d08-9f87-66c93e2ace47-kube-api-access-c7hxg\") pod \"calico-apiserver-655fb5665b-8tf24\" (UID: \"3dd70705-8c14-4d08-9f87-66c93e2ace47\") " pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" May 8 00:40:14.501332 kubelet[2259]: I0508 00:40:14.501321 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq9t\" (UniqueName: \"kubernetes.io/projected/e9d7454a-993f-4132-8ced-f8cdba985c53-kube-api-access-zdq9t\") pod \"coredns-7db6d8ff4d-89lsx\" (UID: \"e9d7454a-993f-4132-8ced-f8cdba985c53\") " pod="kube-system/coredns-7db6d8ff4d-89lsx" May 8 00:40:14.501549 kubelet[2259]: I0508 00:40:14.501352 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35415a0b-9f3d-4f12-b555-b4c08d155deb-config-volume\") pod \"coredns-7db6d8ff4d-xrfkq\" (UID: \"35415a0b-9f3d-4f12-b555-b4c08d155deb\") " pod="kube-system/coredns-7db6d8ff4d-xrfkq" May 8 00:40:14.501549 kubelet[2259]: I0508 00:40:14.501379 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3dd70705-8c14-4d08-9f87-66c93e2ace47-calico-apiserver-certs\") pod \"calico-apiserver-655fb5665b-8tf24\" (UID: \"3dd70705-8c14-4d08-9f87-66c93e2ace47\") " pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" May 8 00:40:14.501549 kubelet[2259]: I0508 00:40:14.501401 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/d7ae0688-a473-448c-b8b9-7f2261bb0d9a-calico-apiserver-certs\") pod \"calico-apiserver-655fb5665b-b5526\" (UID: \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\") " pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" May 8 00:40:14.501549 kubelet[2259]: I0508 00:40:14.501450 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbvw\" (UniqueName: \"kubernetes.io/projected/d7ae0688-a473-448c-b8b9-7f2261bb0d9a-kube-api-access-bwbvw\") pod \"calico-apiserver-655fb5665b-b5526\" (UID: \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\") " pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" May 8 00:40:14.501549 kubelet[2259]: I0508 00:40:14.501468 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqbgl\" (UniqueName: \"kubernetes.io/projected/35415a0b-9f3d-4f12-b555-b4c08d155deb-kube-api-access-mqbgl\") pod \"coredns-7db6d8ff4d-xrfkq\" (UID: \"35415a0b-9f3d-4f12-b555-b4c08d155deb\") " pod="kube-system/coredns-7db6d8ff4d-xrfkq" May 8 00:40:14.501699 kubelet[2259]: I0508 00:40:14.501489 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5444d20d-8a4f-4e35-a777-fef99f439552-tigera-ca-bundle\") pod \"calico-kube-controllers-575f4bf5b7-jhlnt\" (UID: \"5444d20d-8a4f-4e35-a777-fef99f439552\") " pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" May 8 00:40:14.501699 kubelet[2259]: I0508 00:40:14.501504 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9d7454a-993f-4132-8ced-f8cdba985c53-config-volume\") pod \"coredns-7db6d8ff4d-89lsx\" (UID: \"e9d7454a-993f-4132-8ced-f8cdba985c53\") " pod="kube-system/coredns-7db6d8ff4d-89lsx" May 8 00:40:14.501699 kubelet[2259]: I0508 00:40:14.501568 2259 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h97km\" (UniqueName: \"kubernetes.io/projected/5444d20d-8a4f-4e35-a777-fef99f439552-kube-api-access-h97km\") pod \"calico-kube-controllers-575f4bf5b7-jhlnt\" (UID: \"5444d20d-8a4f-4e35-a777-fef99f439552\") " pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" May 8 00:40:14.507412 kubelet[2259]: E0508 00:40:14.507382 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:14.508026 env[1314]: time="2025-05-08T00:40:14.507995585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:40:14.646023 kubelet[2259]: E0508 00:40:14.645884 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:14.647470 env[1314]: time="2025-05-08T00:40:14.647414762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-89lsx,Uid:e9d7454a-993f-4132-8ced-f8cdba985c53,Namespace:kube-system,Attempt:0,}" May 8 00:40:14.652139 env[1314]: time="2025-05-08T00:40:14.652085568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-b5526,Uid:d7ae0688-a473-448c-b8b9-7f2261bb0d9a,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:14.654716 kubelet[2259]: E0508 00:40:14.654601 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:14.655310 env[1314]: time="2025-05-08T00:40:14.655267303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrfkq,Uid:35415a0b-9f3d-4f12-b555-b4c08d155deb,Namespace:kube-system,Attempt:0,}" May 8 00:40:14.656563 env[1314]: 
time="2025-05-08T00:40:14.656287455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-575f4bf5b7-jhlnt,Uid:5444d20d-8a4f-4e35-a777-fef99f439552,Namespace:calico-system,Attempt:0,}" May 8 00:40:14.656563 env[1314]: time="2025-05-08T00:40:14.656338652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-8tf24,Uid:3dd70705-8c14-4d08-9f87-66c93e2ace47,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:14.831816 env[1314]: time="2025-05-08T00:40:14.831321687Z" level=error msg="Failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.831816 env[1314]: time="2025-05-08T00:40:14.831739178Z" level=error msg="encountered an error cleaning up failed sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.831816 env[1314]: time="2025-05-08T00:40:14.831794823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-8tf24,Uid:3dd70705-8c14-4d08-9f87-66c93e2ace47,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.832422 kubelet[2259]: E0508 00:40:14.832329 2259 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.832422 kubelet[2259]: E0508 00:40:14.832411 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" May 8 00:40:14.833118 kubelet[2259]: E0508 00:40:14.832441 2259 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" May 8 00:40:14.833118 kubelet[2259]: E0508 00:40:14.832496 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655fb5665b-8tf24_calico-apiserver(3dd70705-8c14-4d08-9f87-66c93e2ace47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655fb5665b-8tf24_calico-apiserver(3dd70705-8c14-4d08-9f87-66c93e2ace47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" podUID="3dd70705-8c14-4d08-9f87-66c93e2ace47" May 8 00:40:14.842788 env[1314]: time="2025-05-08T00:40:14.842721105Z" level=error msg="Failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.843303 env[1314]: time="2025-05-08T00:40:14.843273310Z" level=error msg="encountered an error cleaning up failed sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.843453 env[1314]: time="2025-05-08T00:40:14.843405390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-89lsx,Uid:e9d7454a-993f-4132-8ced-f8cdba985c53,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.843730 kubelet[2259]: E0508 00:40:14.843677 2259 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.843803 kubelet[2259]: E0508 00:40:14.843756 
2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-89lsx" May 8 00:40:14.843803 kubelet[2259]: E0508 00:40:14.843783 2259 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-89lsx" May 8 00:40:14.843898 kubelet[2259]: E0508 00:40:14.843846 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-89lsx_kube-system(e9d7454a-993f-4132-8ced-f8cdba985c53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-89lsx_kube-system(e9d7454a-993f-4132-8ced-f8cdba985c53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-89lsx" podUID="e9d7454a-993f-4132-8ced-f8cdba985c53" May 8 00:40:14.844727 env[1314]: time="2025-05-08T00:40:14.844699601Z" level=error msg="Failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.845061 env[1314]: time="2025-05-08T00:40:14.845035807Z" level=error msg="encountered an error cleaning up failed sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.845174 env[1314]: time="2025-05-08T00:40:14.845144904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-b5526,Uid:d7ae0688-a473-448c-b8b9-7f2261bb0d9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.845723 kubelet[2259]: E0508 00:40:14.845672 2259 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.845795 kubelet[2259]: E0508 00:40:14.845746 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" May 8 00:40:14.845795 kubelet[2259]: E0508 00:40:14.845772 2259 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" May 8 00:40:14.845897 kubelet[2259]: E0508 00:40:14.845822 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655fb5665b-b5526_calico-apiserver(d7ae0688-a473-448c-b8b9-7f2261bb0d9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655fb5665b-b5526_calico-apiserver(d7ae0688-a473-448c-b8b9-7f2261bb0d9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" May 8 00:40:14.850947 env[1314]: time="2025-05-08T00:40:14.850858854Z" level=error msg="Failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.851394 env[1314]: time="2025-05-08T00:40:14.851338022Z" level=error msg="encountered an error cleaning up failed sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.851741 env[1314]: time="2025-05-08T00:40:14.851411671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrfkq,Uid:35415a0b-9f3d-4f12-b555-b4c08d155deb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.851852 kubelet[2259]: E0508 00:40:14.851787 2259 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.851923 kubelet[2259]: E0508 00:40:14.851879 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xrfkq" May 8 00:40:14.851923 kubelet[2259]: E0508 00:40:14.851911 2259 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xrfkq" May 8 00:40:14.851994 kubelet[2259]: E0508 00:40:14.851961 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xrfkq_kube-system(35415a0b-9f3d-4f12-b555-b4c08d155deb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xrfkq_kube-system(35415a0b-9f3d-4f12-b555-b4c08d155deb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xrfkq" podUID="35415a0b-9f3d-4f12-b555-b4c08d155deb" May 8 00:40:14.864161 env[1314]: time="2025-05-08T00:40:14.864066165Z" level=error msg="Failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.864454 env[1314]: time="2025-05-08T00:40:14.864418403Z" level=error msg="encountered an error cleaning up failed sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.864507 env[1314]: time="2025-05-08T00:40:14.864464250Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-575f4bf5b7-jhlnt,Uid:5444d20d-8a4f-4e35-a777-fef99f439552,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.864768 kubelet[2259]: E0508 00:40:14.864728 2259 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:14.864875 kubelet[2259]: E0508 00:40:14.864796 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" May 8 00:40:14.864875 kubelet[2259]: E0508 00:40:14.864818 2259 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" May 8 00:40:14.864959 kubelet[2259]: E0508 00:40:14.864879 2259 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-575f4bf5b7-jhlnt_calico-system(5444d20d-8a4f-4e35-a777-fef99f439552)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-575f4bf5b7-jhlnt_calico-system(5444d20d-8a4f-4e35-a777-fef99f439552)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" podUID="5444d20d-8a4f-4e35-a777-fef99f439552" May 8 00:40:15.413377 env[1314]: time="2025-05-08T00:40:15.413316950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrhhb,Uid:a1c58f86-7966-473c-98f3-e00538745ae1,Namespace:calico-system,Attempt:0,}" May 8 00:40:15.509310 kubelet[2259]: I0508 00:40:15.509276 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:40:15.509946 env[1314]: time="2025-05-08T00:40:15.509906629Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:40:15.510608 kubelet[2259]: I0508 00:40:15.510579 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:40:15.511001 env[1314]: time="2025-05-08T00:40:15.510968640Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:40:15.512965 kubelet[2259]: I0508 00:40:15.512939 2259 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:40:15.514543 env[1314]: time="2025-05-08T00:40:15.514491940Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:40:15.515034 kubelet[2259]: I0508 00:40:15.515006 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:40:15.515437 env[1314]: time="2025-05-08T00:40:15.515409639Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:40:15.516440 kubelet[2259]: I0508 00:40:15.516403 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:40:15.516868 env[1314]: time="2025-05-08T00:40:15.516823656Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:40:15.547039 env[1314]: time="2025-05-08T00:40:15.545955095Z" level=error msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" failed" error="failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.547280 kubelet[2259]: E0508 00:40:15.546150 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:40:15.547280 kubelet[2259]: E0508 00:40:15.546196 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a"} May 8 00:40:15.547280 kubelet[2259]: E0508 00:40:15.546256 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:15.547280 kubelet[2259]: E0508 00:40:15.546279 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xrfkq" podUID="35415a0b-9f3d-4f12-b555-b4c08d155deb" May 8 00:40:15.566229 env[1314]: time="2025-05-08T00:40:15.566153329Z" level=error msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" failed" error="failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 
00:40:15.566472 kubelet[2259]: E0508 00:40:15.566417 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:40:15.566575 kubelet[2259]: E0508 00:40:15.566500 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195"} May 8 00:40:15.566575 kubelet[2259]: E0508 00:40:15.566565 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:15.566713 kubelet[2259]: E0508 00:40:15.566599 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" May 8 00:40:15.574328 
env[1314]: time="2025-05-08T00:40:15.574251583Z" level=error msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" failed" error="failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.574596 kubelet[2259]: E0508 00:40:15.574511 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:40:15.574679 kubelet[2259]: E0508 00:40:15.574593 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285"} May 8 00:40:15.574679 kubelet[2259]: E0508 00:40:15.574641 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:15.574774 kubelet[2259]: E0508 00:40:15.574671 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" podUID="5444d20d-8a4f-4e35-a777-fef99f439552" May 8 00:40:15.577682 env[1314]: time="2025-05-08T00:40:15.577623026Z" level=error msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" failed" error="failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.578097 kubelet[2259]: E0508 00:40:15.578042 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:40:15.578191 kubelet[2259]: E0508 00:40:15.578099 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97"} May 8 00:40:15.578191 kubelet[2259]: E0508 00:40:15.578131 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:15.578191 kubelet[2259]: E0508 00:40:15.578171 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-89lsx" podUID="e9d7454a-993f-4132-8ced-f8cdba985c53" May 8 00:40:15.579590 env[1314]: time="2025-05-08T00:40:15.579549885Z" level=error msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" failed" error="failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.579732 kubelet[2259]: E0508 00:40:15.579697 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:40:15.579810 kubelet[2259]: E0508 00:40:15.579734 2259 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d"} May 8 00:40:15.579810 kubelet[2259]: E0508 00:40:15.579756 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:15.579810 kubelet[2259]: E0508 00:40:15.579772 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" podUID="3dd70705-8c14-4d08-9f87-66c93e2ace47" May 8 00:40:15.676712 env[1314]: time="2025-05-08T00:40:15.676489276Z" level=error msg="Failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.677071 env[1314]: time="2025-05-08T00:40:15.677027143Z" level=error msg="encountered an error cleaning up failed sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.677221 env[1314]: time="2025-05-08T00:40:15.677184542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrhhb,Uid:a1c58f86-7966-473c-98f3-e00538745ae1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.677520 kubelet[2259]: E0508 00:40:15.677479 2259 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.677617 kubelet[2259]: E0508 00:40:15.677561 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrhhb" May 8 00:40:15.677617 kubelet[2259]: E0508 00:40:15.677587 2259 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrhhb" May 8 00:40:15.677700 kubelet[2259]: E0508 00:40:15.677641 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rrhhb_calico-system(a1c58f86-7966-473c-98f3-e00538745ae1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rrhhb_calico-system(a1c58f86-7966-473c-98f3-e00538745ae1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:15.680986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688-shm.mount: Deactivated successfully. 
May 8 00:40:16.519471 kubelet[2259]: I0508 00:40:16.519424 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:40:16.520217 env[1314]: time="2025-05-08T00:40:16.520174436Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:40:16.548469 env[1314]: time="2025-05-08T00:40:16.548394282Z" level=error msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" failed" error="failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:16.548779 kubelet[2259]: E0508 00:40:16.548714 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:40:16.548888 kubelet[2259]: E0508 00:40:16.548792 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688"} May 8 00:40:16.548888 kubelet[2259]: E0508 00:40:16.548869 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:16.549010 kubelet[2259]: E0508 00:40:16.548904 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:17.898466 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:34020.service. May 8 00:40:17.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.16:22-10.0.0.1:34020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:17.900997 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:17.901213 kernel: audit: type=1130 audit(1746664817.897:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.16:22-10.0.0.1:34020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:17.935000 audit[3394]: USER_ACCT pid=3394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.936280 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 34020 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:17.938366 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:17.937000 audit[3394]: CRED_ACQ pid=3394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.942718 systemd-logind[1294]: New session 11 of user core. May 8 00:40:17.943472 systemd[1]: Started session-11.scope. May 8 00:40:17.945623 kernel: audit: type=1101 audit(1746664817.935:296): pid=3394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.945697 kernel: audit: type=1103 audit(1746664817.937:297): pid=3394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.945723 kernel: audit: type=1006 audit(1746664817.937:298): pid=3394 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 8 00:40:17.937000 audit[3394]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4b9a9050 a2=3 a3=0 items=0 ppid=1 pid=3394 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:17.937000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:17.957104 kernel: audit: type=1300 audit(1746664817.937:298): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4b9a9050 a2=3 a3=0 items=0 ppid=1 pid=3394 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:17.957158 kernel: audit: type=1327 audit(1746664817.937:298): proctitle=737368643A20636F7265205B707269765D May 8 00:40:17.957176 kernel: audit: type=1105 audit(1746664817.948:299): pid=3394 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.948000 audit[3394]: USER_START pid=3394 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.950000 audit[3397]: CRED_ACQ pid=3397 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:17.971522 kernel: audit: type=1103 audit(1746664817.950:300): pid=3397 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:18.121400 sshd[3394]: pam_unix(sshd:session): session closed for user core May 8 00:40:18.121000 audit[3394]: 
USER_END pid=3394 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:18.124258 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:34020.service: Deactivated successfully. May 8 00:40:18.125469 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:40:18.126369 systemd-logind[1294]: Session 11 logged out. Waiting for processes to exit. May 8 00:40:18.127371 systemd-logind[1294]: Removed session 11. May 8 00:40:18.121000 audit[3394]: CRED_DISP pid=3394 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:18.134120 kernel: audit: type=1106 audit(1746664818.121:301): pid=3394 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:18.134204 kernel: audit: type=1104 audit(1746664818.121:302): pid=3394 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:18.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.16:22-10.0.0.1:34020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:23.132482 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:23.132674 kernel: audit: type=1130 audit(1746664823.124:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.16:22-10.0.0.1:34036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:23.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.16:22-10.0.0.1:34036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:23.125310 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:34036.service. May 8 00:40:23.164000 audit[3411]: USER_ACCT pid=3411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.181464 kernel: audit: type=1101 audit(1746664823.164:305): pid=3411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.181535 kernel: audit: type=1103 audit(1746664823.169:306): pid=3411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.181584 kernel: audit: type=1006 audit(1746664823.169:307): pid=3411 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 May 8 00:40:23.181608 kernel: audit: type=1300 audit(1746664823.169:307): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed818b890 a2=3 a3=0 items=0 ppid=1 
pid=3411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:23.169000 audit[3411]: CRED_ACQ pid=3411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.169000 audit[3411]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed818b890 a2=3 a3=0 items=0 ppid=1 pid=3411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:23.181781 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 34036 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:23.170405 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:23.175345 systemd-logind[1294]: New session 12 of user core. May 8 00:40:23.175941 systemd[1]: Started session-12.scope. 
May 8 00:40:23.192162 kernel: audit: type=1327 audit(1746664823.169:307): proctitle=737368643A20636F7265205B707269765D May 8 00:40:23.192260 kernel: audit: type=1105 audit(1746664823.182:308): pid=3411 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.192279 kernel: audit: type=1103 audit(1746664823.183:309): pid=3414 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.169000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:23.182000 audit[3411]: USER_START pid=3411 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.183000 audit[3414]: CRED_ACQ pid=3414 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.312231 sshd[3411]: pam_unix(sshd:session): session closed for user core May 8 00:40:23.312000 audit[3411]: USER_END pid=3411 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.315106 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:34036.service: Deactivated successfully. 
May 8 00:40:23.316524 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:40:23.317295 systemd-logind[1294]: Session 12 logged out. Waiting for processes to exit. May 8 00:40:23.322130 kernel: audit: type=1106 audit(1746664823.312:310): pid=3411 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.322207 kernel: audit: type=1104 audit(1746664823.312:311): pid=3411 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.312000 audit[3411]: CRED_DISP pid=3411 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:23.318967 systemd-logind[1294]: Removed session 12. May 8 00:40:23.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.16:22-10.0.0.1:34036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:25.601772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653111912.mount: Deactivated successfully. 
May 8 00:40:26.411562 env[1314]: time="2025-05-08T00:40:26.411490365Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:40:27.360579 env[1314]: time="2025-05-08T00:40:27.360506935Z" level=error msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" failed" error="failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:27.360992 kubelet[2259]: E0508 00:40:27.360915 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:40:27.361312 kubelet[2259]: E0508 00:40:27.360992 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195"} May 8 00:40:27.361312 kubelet[2259]: E0508 00:40:27.361030 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 
00:40:27.361312 kubelet[2259]: E0508 00:40:27.361055 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" May 8 00:40:27.362439 env[1314]: time="2025-05-08T00:40:27.362389269Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:27.365945 env[1314]: time="2025-05-08T00:40:27.365898445Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:27.368022 env[1314]: time="2025-05-08T00:40:27.367973598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:27.375198 env[1314]: time="2025-05-08T00:40:27.375138391Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:40:27.376431 env[1314]: time="2025-05-08T00:40:27.375801614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:40:27.388628 env[1314]: 
time="2025-05-08T00:40:27.388576016Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:40:27.411303 env[1314]: time="2025-05-08T00:40:27.411243027Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:40:27.411773 env[1314]: time="2025-05-08T00:40:27.411311970Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:40:27.446227 env[1314]: time="2025-05-08T00:40:27.446159692Z" level=error msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" failed" error="failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:27.446775 kubelet[2259]: E0508 00:40:27.446709 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:40:27.446906 kubelet[2259]: E0508 00:40:27.446797 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688"} May 8 00:40:27.446951 kubelet[2259]: E0508 00:40:27.446899 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:27.447019 kubelet[2259]: E0508 00:40:27.446967 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:27.502510 env[1314]: time="2025-05-08T00:40:27.502439195Z" level=error msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" failed" error="failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:27.502752 kubelet[2259]: E0508 00:40:27.502704 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:40:27.502822 kubelet[2259]: E0508 00:40:27.502766 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285"} May 8 00:40:27.502822 kubelet[2259]: E0508 00:40:27.502804 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:27.502972 kubelet[2259]: E0508 00:40:27.502830 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" podUID="5444d20d-8a4f-4e35-a777-fef99f439552" May 8 00:40:28.315725 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:51332.service. May 8 00:40:28.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.16:22-10.0.0.1:51332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:28.320068 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:28.320144 kernel: audit: type=1130 audit(1746664828.315:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.16:22-10.0.0.1:51332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:28.411023 env[1314]: time="2025-05-08T00:40:28.410977352Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:40:28.411177 env[1314]: time="2025-05-08T00:40:28.411063638Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:40:28.437270 env[1314]: time="2025-05-08T00:40:28.437192442Z" level=error msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" failed" error="failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:28.437648 kubelet[2259]: E0508 00:40:28.437457 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:40:28.437648 kubelet[2259]: E0508 00:40:28.437521 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a"} May 8 00:40:28.437648 
kubelet[2259]: E0508 00:40:28.437554 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:28.437648 kubelet[2259]: E0508 00:40:28.437577 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xrfkq" podUID="35415a0b-9f3d-4f12-b555-b4c08d155deb" May 8 00:40:28.439057 env[1314]: time="2025-05-08T00:40:28.438992276Z" level=error msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" failed" error="failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:28.439305 kubelet[2259]: E0508 00:40:28.439256 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:40:28.439374 kubelet[2259]: E0508 00:40:28.439323 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d"} May 8 00:40:28.439374 kubelet[2259]: E0508 00:40:28.439360 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:28.439461 kubelet[2259]: E0508 00:40:28.439384 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" podUID="3dd70705-8c14-4d08-9f87-66c93e2ace47" May 8 00:40:28.615000 audit[3498]: USER_ACCT pid=3498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.616760 sshd[3498]: Accepted 
publickey for core from 10.0.0.1 port 51332 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:28.654787 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:28.658869 kernel: audit: type=1101 audit(1746664828.615:314): pid=3498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.617000 audit[3498]: CRED_ACQ pid=3498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.663036 systemd-logind[1294]: New session 13 of user core. May 8 00:40:28.663893 systemd[1]: Started session-13.scope. May 8 00:40:28.666237 kernel: audit: type=1103 audit(1746664828.617:315): pid=3498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.666317 kernel: audit: type=1006 audit(1746664828.617:316): pid=3498 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 May 8 00:40:28.617000 audit[3498]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0ee07d0 a2=3 a3=0 items=0 ppid=1 pid=3498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:28.617000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:28.672495 kernel: audit: type=1300 audit(1746664828.617:316): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0ee07d0 a2=3 a3=0 items=0 ppid=1 
pid=3498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:28.672550 kernel: audit: type=1327 audit(1746664828.617:316): proctitle=737368643A20636F7265205B707269765D May 8 00:40:28.666000 audit[3498]: USER_START pid=3498 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.676918 kernel: audit: type=1105 audit(1746664828.666:317): pid=3498 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.676964 kernel: audit: type=1103 audit(1746664828.671:318): pid=3548 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.671000 audit[3548]: CRED_ACQ pid=3548 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.876566 sshd[3498]: pam_unix(sshd:session): session closed for user core May 8 00:40:28.876000 audit[3498]: USER_END pid=3498 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.879134 systemd[1]: 
sshd@12-10.0.0.16:22-10.0.0.1:51332.service: Deactivated successfully. May 8 00:40:28.880032 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:40:28.882555 systemd-logind[1294]: Session 13 logged out. Waiting for processes to exit. May 8 00:40:28.883418 systemd-logind[1294]: Removed session 13. May 8 00:40:28.903879 kernel: audit: type=1106 audit(1746664828.876:319): pid=3498 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.904256 env[1314]: time="2025-05-08T00:40:28.904178301Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\"" May 8 00:40:28.876000 audit[3498]: CRED_DISP pid=3498 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:28.905274 env[1314]: time="2025-05-08T00:40:28.905221001Z" level=info msg="StartContainer for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\"" May 8 00:40:28.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.16:22-10.0.0.1:51332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:28.908853 kernel: audit: type=1104 audit(1746664828.876:320): pid=3498 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:29.143231 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:40:29.143404 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:40:29.177100 env[1314]: time="2025-05-08T00:40:29.177025224Z" level=info msg="StartContainer for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\" returns successfully" May 8 00:40:29.192033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5-rootfs.mount: Deactivated successfully. May 8 00:40:29.552599 kubelet[2259]: E0508 00:40:29.552562 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:29.713271 env[1314]: time="2025-05-08T00:40:29.713219385Z" level=info msg="shim disconnected" id=39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 May 8 00:40:29.713271 env[1314]: time="2025-05-08T00:40:29.713267878Z" level=warning msg="cleaning up after shim disconnected" id=39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 namespace=k8s.io May 8 00:40:29.713271 env[1314]: time="2025-05-08T00:40:29.713276664Z" level=info msg="cleaning up dead shim" May 8 00:40:29.713725 env[1314]: time="2025-05-08T00:40:29.713256355Z" level=error msg="ExecSync for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"ab4ccd027770dca6e134b9a28dad57f893aecd147948e471c9774239a2c8803a\": task 
39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 not found: not found" May 8 00:40:29.713758 kubelet[2259]: E0508 00:40:29.713502 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"ab4ccd027770dca6e134b9a28dad57f893aecd147948e471c9774239a2c8803a\": task 39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 not found: not found" containerID="39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:29.714542 env[1314]: time="2025-05-08T00:40:29.714505511Z" level=error msg="ExecSync for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 not found: not found" May 8 00:40:29.714655 kubelet[2259]: E0508 00:40:29.714633 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 not found: not found" containerID="39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:29.715492 env[1314]: time="2025-05-08T00:40:29.715434364Z" level=error msg="ExecSync for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 not found: not found" May 8 00:40:29.715607 kubelet[2259]: E0508 00:40:29.715558 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = failed to exec in container: failed to load task: no running task found: task 39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5 not found: not found" containerID="39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:29.719873 env[1314]: time="2025-05-08T00:40:29.719784837Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3621 runtime=io.containerd.runc.v2\n" May 8 00:40:30.411526 env[1314]: time="2025-05-08T00:40:30.411263927Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:40:30.439984 env[1314]: time="2025-05-08T00:40:30.439889698Z" level=error msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" failed" error="failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:30.440281 kubelet[2259]: E0508 00:40:30.440221 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:40:30.440358 kubelet[2259]: E0508 00:40:30.440291 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97"} May 8 00:40:30.440358 kubelet[2259]: 
E0508 00:40:30.440328 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:30.440456 kubelet[2259]: E0508 00:40:30.440354 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-89lsx" podUID="e9d7454a-993f-4132-8ced-f8cdba985c53" May 8 00:40:30.561016 kubelet[2259]: I0508 00:40:30.560971 2259 scope.go:117] "RemoveContainer" containerID="39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5" May 8 00:40:30.561394 kubelet[2259]: E0508 00:40:30.561046 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:30.567608 env[1314]: time="2025-05-08T00:40:30.567556406Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 8 00:40:32.149346 env[1314]: time="2025-05-08T00:40:32.149102455Z" level=info msg="CreateContainer within sandbox 
\"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\"" May 8 00:40:32.150202 env[1314]: time="2025-05-08T00:40:32.149820200Z" level=info msg="StartContainer for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\"" May 8 00:40:32.511389 env[1314]: time="2025-05-08T00:40:32.511317516Z" level=info msg="StartContainer for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\" returns successfully" May 8 00:40:32.526178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0-rootfs.mount: Deactivated successfully. May 8 00:40:32.567098 kubelet[2259]: E0508 00:40:32.567062 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:32.780789 kubelet[2259]: I0508 00:40:32.780435 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mhdts" podStartSLOduration=5.842923455 podStartE2EDuration="35.780411586s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="2025-05-08 00:39:57.43974915 +0000 UTC m=+22.126159892" lastFinishedPulling="2025-05-08 00:40:27.377237281 +0000 UTC m=+52.063648023" observedRunningTime="2025-05-08 00:40:29.648349889 +0000 UTC m=+54.334760661" watchObservedRunningTime="2025-05-08 00:40:32.780411586 +0000 UTC m=+57.466822348" May 8 00:40:32.988173 env[1314]: time="2025-05-08T00:40:32.988109500Z" level=error msg="ExecSync for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"b847d6f955614cb99c0da1085b1d8988c77064ea16c2f0acd95ca5f6d4aaff0d\": task 
c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 not found: not found" May 8 00:40:32.988452 env[1314]: time="2025-05-08T00:40:32.988140369Z" level=info msg="shim disconnected" id=c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 May 8 00:40:32.988452 env[1314]: time="2025-05-08T00:40:32.988449000Z" level=warning msg="cleaning up after shim disconnected" id=c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 namespace=k8s.io May 8 00:40:32.988452 env[1314]: time="2025-05-08T00:40:32.988462557Z" level=info msg="cleaning up dead shim" May 8 00:40:32.988753 kubelet[2259]: E0508 00:40:32.988398 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"b847d6f955614cb99c0da1085b1d8988c77064ea16c2f0acd95ca5f6d4aaff0d\": task c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 not found: not found" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:32.989440 env[1314]: time="2025-05-08T00:40:32.989389192Z" level=error msg="ExecSync for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 not found: not found" May 8 00:40:32.989723 kubelet[2259]: E0508 00:40:32.989641 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 not found: not found" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:32.990717 env[1314]: 
time="2025-05-08T00:40:32.990549375Z" level=error msg="ExecSync for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 not found: not found" May 8 00:40:32.990942 kubelet[2259]: E0508 00:40:32.990906 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0 not found: not found" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:32.994901 env[1314]: time="2025-05-08T00:40:32.994851436Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3709 runtime=io.containerd.runc.v2\n" May 8 00:40:33.570671 kubelet[2259]: I0508 00:40:33.570632 2259 scope.go:117] "RemoveContainer" containerID="39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5" May 8 00:40:33.571119 kubelet[2259]: I0508 00:40:33.571014 2259 scope.go:117] "RemoveContainer" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" May 8 00:40:33.571119 kubelet[2259]: E0508 00:40:33.571092 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:33.571591 kubelet[2259]: E0508 00:40:33.571551 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-mhdts_calico-system(8af932a1-2652-43b6-80a3-ba0182b9cf24)\"" 
pod="calico-system/calico-node-mhdts" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" May 8 00:40:33.572097 env[1314]: time="2025-05-08T00:40:33.572039866Z" level=info msg="RemoveContainer for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\"" May 8 00:40:33.770125 env[1314]: time="2025-05-08T00:40:33.770062295Z" level=info msg="RemoveContainer for \"39fb427f52e9865a9b4a20e0bac1d5a2b407f3707482bbdaa556945afb3caae5\" returns successfully" May 8 00:40:33.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.16:22-10.0.0.1:51342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:33.880322 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:51342.service. May 8 00:40:33.937596 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:33.937749 kernel: audit: type=1130 audit(1746664833.878:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.16:22-10.0.0.1:51342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:33.969000 audit[3723]: USER_ACCT pid=3723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:33.971487 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 51342 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:33.973000 audit[3723]: CRED_ACQ pid=3723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:33.976020 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:33.979996 systemd-logind[1294]: New session 14 of user core. May 8 00:40:33.980321 systemd[1]: Started session-14.scope. May 8 00:40:33.996019 kernel: audit: type=1101 audit(1746664833.969:323): pid=3723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:33.996095 kernel: audit: type=1103 audit(1746664833.973:324): pid=3723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:33.996119 kernel: audit: type=1006 audit(1746664833.973:325): pid=3723 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 8 00:40:33.973000 audit[3723]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6faf380 a2=3 a3=0 items=0 ppid=1 pid=3723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:34.002476 kernel: audit: type=1300 audit(1746664833.973:325): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6faf380 a2=3 a3=0 items=0 ppid=1 pid=3723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:34.002539 kernel: audit: type=1327 audit(1746664833.973:325): proctitle=737368643A20636F7265205B707269765D May 8 00:40:33.973000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:34.003864 kernel: audit: type=1105 audit(1746664833.982:326): pid=3723 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:33.982000 audit[3723]: USER_START pid=3723 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:34.008115 kernel: audit: type=1103 audit(1746664833.984:327): pid=3726 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:33.984000 audit[3726]: CRED_ACQ pid=3726 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:34.163932 sshd[3723]: pam_unix(sshd:session): session closed for user core May 8 00:40:34.163000 audit[3723]: 
USER_END pid=3723 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:34.166291 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:51342.service: Deactivated successfully. May 8 00:40:34.167495 systemd-logind[1294]: Session 14 logged out. Waiting for processes to exit. May 8 00:40:34.167509 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:40:34.168489 systemd-logind[1294]: Removed session 14. May 8 00:40:34.163000 audit[3723]: CRED_DISP pid=3723 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:34.176464 kernel: audit: type=1106 audit(1746664834.163:328): pid=3723 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:34.176550 kernel: audit: type=1104 audit(1746664834.163:329): pid=3723 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:34.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.16:22-10.0.0.1:51342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:34.575204 kubelet[2259]: I0508 00:40:34.575124 2259 scope.go:117] "RemoveContainer" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" May 8 00:40:34.575702 kubelet[2259]: E0508 00:40:34.575227 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:34.575702 kubelet[2259]: E0508 00:40:34.575611 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-mhdts_calico-system(8af932a1-2652-43b6-80a3-ba0182b9cf24)\"" pod="calico-system/calico-node-mhdts" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" May 8 00:40:37.512706 kubelet[2259]: I0508 00:40:37.512642 2259 scope.go:117] "RemoveContainer" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" May 8 00:40:37.513165 kubelet[2259]: E0508 00:40:37.512758 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:37.513165 kubelet[2259]: E0508 00:40:37.513145 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-mhdts_calico-system(8af932a1-2652-43b6-80a3-ba0182b9cf24)\"" pod="calico-system/calico-node-mhdts" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" May 8 00:40:38.411155 env[1314]: time="2025-05-08T00:40:38.411085981Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:40:38.443672 env[1314]: time="2025-05-08T00:40:38.443590031Z" level=error msg="StopPodSandbox for 
\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" failed" error="failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:38.443975 kubelet[2259]: E0508 00:40:38.443898 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:40:38.444042 kubelet[2259]: E0508 00:40:38.443984 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195"} May 8 00:40:38.444042 kubelet[2259]: E0508 00:40:38.444022 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:38.444142 kubelet[2259]: E0508 00:40:38.444046 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" May 8 00:40:39.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.16:22-10.0.0.1:56448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.168413 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:56448.service. May 8 00:40:39.169776 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:39.169825 kernel: audit: type=1130 audit(1746664839.167:331): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.16:22-10.0.0.1:56448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.201000 audit[3763]: USER_ACCT pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.202958 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 56448 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:39.205368 sshd[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:39.208876 systemd-logind[1294]: New session 15 of user core. May 8 00:40:39.209369 systemd[1]: Started session-15.scope. 
May 8 00:40:39.204000 audit[3763]: CRED_ACQ pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.222598 kernel: audit: type=1101 audit(1746664839.201:332): pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.222652 kernel: audit: type=1103 audit(1746664839.204:333): pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.222673 kernel: audit: type=1006 audit(1746664839.204:334): pid=3763 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 8 00:40:39.204000 audit[3763]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe570ead0 a2=3 a3=0 items=0 ppid=1 pid=3763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:39.241819 kernel: audit: type=1300 audit(1746664839.204:334): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe570ead0 a2=3 a3=0 items=0 ppid=1 pid=3763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:39.241888 kernel: audit: type=1327 audit(1746664839.204:334): proctitle=737368643A20636F7265205B707269765D May 8 00:40:39.204000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:39.214000 audit[3763]: USER_START pid=3763 uid=0 
auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.247783 kernel: audit: type=1105 audit(1746664839.214:335): pid=3763 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.247817 kernel: audit: type=1103 audit(1746664839.215:336): pid=3766 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.215000 audit[3766]: CRED_ACQ pid=3766 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.349104 sshd[3763]: pam_unix(sshd:session): session closed for user core May 8 00:40:39.353045 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:56456.service. May 8 00:40:39.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.16:22-10.0.0.1:56456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:39.357000 audit[3763]: USER_END pid=3763 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.363337 kernel: audit: type=1130 audit(1746664839.352:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.16:22-10.0.0.1:56456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.363416 kernel: audit: type=1106 audit(1746664839.357:338): pid=3763 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.357000 audit[3763]: CRED_DISP pid=3763 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.16:22-10.0.0.1:56448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.362400 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:56448.service: Deactivated successfully. May 8 00:40:39.363444 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:40:39.363684 systemd-logind[1294]: Session 15 logged out. Waiting for processes to exit. May 8 00:40:39.364364 systemd-logind[1294]: Removed session 15. 
May 8 00:40:39.385000 audit[3776]: USER_ACCT pid=3776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.386286 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:39.387523 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:39.386000 audit[3776]: CRED_ACQ pid=3776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.386000 audit[3776]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3116f680 a2=3 a3=0 items=0 ppid=1 pid=3776 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:39.386000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:39.391609 systemd-logind[1294]: New session 16 of user core. May 8 00:40:39.392693 systemd[1]: Started session-16.scope. 
May 8 00:40:39.396000 audit[3776]: USER_START pid=3776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.398000 audit[3781]: CRED_ACQ pid=3781 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.411194 env[1314]: time="2025-05-08T00:40:39.411126867Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:40:39.437674 env[1314]: time="2025-05-08T00:40:39.437532392Z" level=error msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" failed" error="failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:39.437826 kubelet[2259]: E0508 00:40:39.437773 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:40:39.437826 kubelet[2259]: E0508 00:40:39.437831 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a"} May 8 00:40:39.438158 kubelet[2259]: E0508 00:40:39.437879 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:39.438158 kubelet[2259]: E0508 00:40:39.437903 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xrfkq" podUID="35415a0b-9f3d-4f12-b555-b4c08d155deb" May 8 00:40:39.545241 sshd[3776]: pam_unix(sshd:session): session closed for user core May 8 00:40:39.545000 audit[3776]: USER_END pid=3776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.545000 audit[3776]: CRED_DISP pid=3776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 
00:40:39.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.16:22-10.0.0.1:56466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.548685 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:56466.service. May 8 00:40:39.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.16:22-10.0.0.1:56456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.549971 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:56456.service: Deactivated successfully. May 8 00:40:39.551425 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:40:39.552666 systemd-logind[1294]: Session 16 logged out. Waiting for processes to exit. May 8 00:40:39.556894 systemd-logind[1294]: Removed session 16. May 8 00:40:39.583000 audit[3814]: USER_ACCT pid=3814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.584207 sshd[3814]: Accepted publickey for core from 10.0.0.1 port 56466 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:39.584000 audit[3814]: CRED_ACQ pid=3814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.584000 audit[3814]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe2939030 a2=3 a3=0 items=0 ppid=1 pid=3814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:39.584000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D May 8 00:40:39.585855 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:39.591442 systemd-logind[1294]: New session 17 of user core. May 8 00:40:39.591875 systemd[1]: Started session-17.scope. May 8 00:40:39.597000 audit[3814]: USER_START pid=3814 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.598000 audit[3818]: CRED_ACQ pid=3818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.736416 sshd[3814]: pam_unix(sshd:session): session closed for user core May 8 00:40:39.736000 audit[3814]: USER_END pid=3814 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.736000 audit[3814]: CRED_DISP pid=3814 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:39.738952 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:56466.service: Deactivated successfully. May 8 00:40:39.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.16:22-10.0.0.1:56466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:39.739935 systemd[1]: session-17.scope: Deactivated successfully. 
May 8 00:40:39.740026 systemd-logind[1294]: Session 17 logged out. Waiting for processes to exit. May 8 00:40:39.740753 systemd-logind[1294]: Removed session 17. May 8 00:40:41.411699 env[1314]: time="2025-05-08T00:40:41.411634480Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:40:41.458309 env[1314]: time="2025-05-08T00:40:41.458214752Z" level=error msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" failed" error="failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:41.458628 kubelet[2259]: E0508 00:40:41.458550 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:40:41.458990 kubelet[2259]: E0508 00:40:41.458638 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d"} May 8 00:40:41.458990 kubelet[2259]: E0508 00:40:41.458692 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:41.458990 kubelet[2259]: E0508 00:40:41.458722 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" podUID="3dd70705-8c14-4d08-9f87-66c93e2ace47" May 8 00:40:42.411502 env[1314]: time="2025-05-08T00:40:42.411459427Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:40:42.411735 env[1314]: time="2025-05-08T00:40:42.411452725Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:40:42.437555 env[1314]: time="2025-05-08T00:40:42.437474825Z" level=error msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" failed" error="failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:42.437972 kubelet[2259]: E0508 00:40:42.437738 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:40:42.437972 kubelet[2259]: E0508 00:40:42.437796 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285"} May 8 00:40:42.437972 kubelet[2259]: E0508 00:40:42.437829 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:42.437972 kubelet[2259]: E0508 00:40:42.437865 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" podUID="5444d20d-8a4f-4e35-a777-fef99f439552" May 8 00:40:42.448090 env[1314]: time="2025-05-08T00:40:42.448023940Z" level=error msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" failed" error="failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:42.448364 kubelet[2259]: E0508 00:40:42.448300 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:40:42.448537 kubelet[2259]: E0508 00:40:42.448372 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688"} May 8 00:40:42.448537 kubelet[2259]: E0508 00:40:42.448411 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:42.448537 kubelet[2259]: E0508 00:40:42.448434 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:44.411349 env[1314]: time="2025-05-08T00:40:44.411300825Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:40:44.436640 env[1314]: time="2025-05-08T00:40:44.436546747Z" level=error msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" failed" error="failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:44.436914 kubelet[2259]: E0508 00:40:44.436858 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:40:44.437181 kubelet[2259]: E0508 00:40:44.436927 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97"} May 8 00:40:44.437181 kubelet[2259]: E0508 00:40:44.436966 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:44.437181 kubelet[2259]: E0508 00:40:44.436994 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-89lsx" podUID="e9d7454a-993f-4132-8ced-f8cdba985c53" May 8 00:40:44.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.16:22-10.0.0.1:56478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:44.739712 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:56478.service. May 8 00:40:44.800001 kernel: kauditd_printk_skb: 23 callbacks suppressed May 8 00:40:44.800155 kernel: audit: type=1130 audit(1746664844.739:358): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.16:22-10.0.0.1:56478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:44.825000 audit[3924]: USER_ACCT pid=3924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.826303 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 56478 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:44.828211 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:44.826000 audit[3924]: CRED_ACQ pid=3924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.832709 systemd-logind[1294]: New session 18 of user core. May 8 00:40:44.832914 systemd[1]: Started session-18.scope. May 8 00:40:44.833911 kernel: audit: type=1101 audit(1746664844.825:359): pid=3924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.833963 kernel: audit: type=1103 audit(1746664844.826:360): pid=3924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.833987 kernel: audit: type=1006 audit(1746664844.827:361): pid=3924 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 8 00:40:44.827000 audit[3924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6c7a5e00 a2=3 a3=0 items=0 ppid=1 pid=3924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:44.840690 kernel: audit: type=1300 audit(1746664844.827:361): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6c7a5e00 a2=3 a3=0 items=0 ppid=1 pid=3924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:44.840750 kernel: audit: type=1327 audit(1746664844.827:361): proctitle=737368643A20636F7265205B707269765D May 8 00:40:44.827000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:44.842034 kernel: audit: type=1105 audit(1746664844.837:362): pid=3924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.837000 audit[3924]: USER_START pid=3924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.838000 audit[3927]: CRED_ACQ pid=3927 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.849617 kernel: audit: type=1103 audit(1746664844.838:363): pid=3927 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.932983 sshd[3924]: pam_unix(sshd:session): session closed for user core May 8 00:40:44.933000 audit[3924]: 
USER_END pid=3924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.935498 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:56478.service: Deactivated successfully. May 8 00:40:44.936832 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:40:44.936997 systemd-logind[1294]: Session 18 logged out. Waiting for processes to exit. May 8 00:40:44.938144 systemd-logind[1294]: Removed session 18. May 8 00:40:44.933000 audit[3924]: CRED_DISP pid=3924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.941674 kernel: audit: type=1106 audit(1746664844.933:364): pid=3924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.941734 kernel: audit: type=1104 audit(1746664844.933:365): pid=3924 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:44.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.16:22-10.0.0.1:56478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:48.410232 kubelet[2259]: I0508 00:40:48.410185 2259 scope.go:117] "RemoveContainer" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" May 8 00:40:48.410797 kubelet[2259]: E0508 00:40:48.410766 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:48.413555 env[1314]: time="2025-05-08T00:40:48.413514675Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 8 00:40:49.411226 env[1314]: time="2025-05-08T00:40:49.411162361Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:40:49.505256 env[1314]: time="2025-05-08T00:40:49.505182979Z" level=error msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" failed" error="failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.505668 kubelet[2259]: E0508 00:40:49.505488 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:40:49.505668 kubelet[2259]: E0508 00:40:49.505553 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195"} May 8 00:40:49.505668 kubelet[2259]: E0508 00:40:49.505588 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:49.505668 kubelet[2259]: E0508 00:40:49.505612 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" May 8 00:40:49.520536 env[1314]: time="2025-05-08T00:40:49.520504175Z" level=info msg="CreateContainer within sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\"" May 8 00:40:49.521432 env[1314]: time="2025-05-08T00:40:49.521043223Z" level=info msg="StartContainer for \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\"" May 8 00:40:49.930925 env[1314]: time="2025-05-08T00:40:49.930867736Z" level=info msg="StartContainer for 
\"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\" returns successfully" May 8 00:40:49.935760 kubelet[2259]: E0508 00:40:49.934916 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:49.936436 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:53464.service. May 8 00:40:49.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.16:22-10.0.0.1:53464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:49.946126 env[1314]: time="2025-05-08T00:40:49.946052742Z" level=error msg="ExecSync for \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\" failed" error="failed to exec in container: failed to create exec \"0cbb3a1b749130e9a3cf2fe0e19e524d730169d3c674b7b7e02470f99a23a2d8\": cannot exec in a stopped state: unknown" May 8 00:40:49.946356 kubelet[2259]: E0508 00:40:49.946298 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"0cbb3a1b749130e9a3cf2fe0e19e524d730169d3c674b7b7e02470f99a23a2d8\": cannot exec in a stopped state: unknown" containerID="29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:49.952436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52-rootfs.mount: Deactivated successfully. 
May 8 00:40:49.961528 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:49.961666 kernel: audit: type=1130 audit(1746664849.935:367): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.16:22-10.0.0.1:53464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:50.180000 audit[4007]: USER_ACCT pid=4007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.181964 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 53464 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:50.184299 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:50.198705 systemd-logind[1294]: New session 19 of user core. May 8 00:40:50.199047 systemd[1]: Started session-19.scope. 
May 8 00:40:50.183000 audit[4007]: CRED_ACQ pid=4007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.204552 kernel: audit: type=1101 audit(1746664850.180:368): pid=4007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.204668 kernel: audit: type=1103 audit(1746664850.183:369): pid=4007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.204712 kernel: audit: type=1006 audit(1746664850.183:370): pid=4007 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 May 8 00:40:50.183000 audit[4007]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd75ef23b0 a2=3 a3=0 items=0 ppid=1 pid=4007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:50.211349 kernel: audit: type=1300 audit(1746664850.183:370): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd75ef23b0 a2=3 a3=0 items=0 ppid=1 pid=4007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:50.211402 kernel: audit: type=1327 audit(1746664850.183:370): proctitle=737368643A20636F7265205B707269765D May 8 00:40:50.183000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:50.203000 audit[4007]: USER_START pid=4007 uid=0 
auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.217620 kernel: audit: type=1105 audit(1746664850.203:371): pid=4007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.205000 audit[4016]: CRED_ACQ pid=4016 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.237299 kernel: audit: type=1103 audit(1746664850.205:372): pid=4016 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.258322 env[1314]: time="2025-05-08T00:40:50.258247020Z" level=info msg="shim disconnected" id=29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52 May 8 00:40:50.258322 env[1314]: time="2025-05-08T00:40:50.258316673Z" level=warning msg="cleaning up after shim disconnected" id=29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52 namespace=k8s.io May 8 00:40:50.258322 env[1314]: time="2025-05-08T00:40:50.258325289Z" level=info msg="cleaning up dead shim" May 8 00:40:50.261026 env[1314]: time="2025-05-08T00:40:50.260971292Z" level=error msg="ExecSync for \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec 
\"12722a204cbfbf1a74493d2eabb27693d8f25754945672eeab1cc4a5da8c6d12\": task 29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52 not found: not found" May 8 00:40:50.261382 kubelet[2259]: E0508 00:40:50.261317 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"12722a204cbfbf1a74493d2eabb27693d8f25754945672eeab1cc4a5da8c6d12\": task 29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52 not found: not found" containerID="29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:50.263887 env[1314]: time="2025-05-08T00:40:50.263814301Z" level=error msg="ExecSync for \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52 not found: not found" May 8 00:40:50.264276 kubelet[2259]: E0508 00:40:50.264204 2259 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52 not found: not found" containerID="29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 8 00:40:50.274308 env[1314]: time="2025-05-08T00:40:50.274230022Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4024 runtime=io.containerd.runc.v2\n" May 8 00:40:50.317559 sshd[4007]: pam_unix(sshd:session): session closed for user core May 8 00:40:50.317000 audit[4007]: USER_END pid=4007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.320469 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:53464.service: Deactivated successfully. May 8 00:40:50.321417 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:40:50.317000 audit[4007]: CRED_DISP pid=4007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.326040 systemd-logind[1294]: Session 19 logged out. Waiting for processes to exit. May 8 00:40:50.326883 systemd-logind[1294]: Removed session 19. May 8 00:40:50.328181 kernel: audit: type=1106 audit(1746664850.317:373): pid=4007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.328235 kernel: audit: type=1104 audit(1746664850.317:374): pid=4007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:50.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.16:22-10.0.0.1:53464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:50.411073 kubelet[2259]: E0508 00:40:50.411017 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:50.939030 kubelet[2259]: I0508 00:40:50.938993 2259 scope.go:117] "RemoveContainer" containerID="c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0" May 8 00:40:50.939497 kubelet[2259]: I0508 00:40:50.939279 2259 scope.go:117] "RemoveContainer" containerID="29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52" May 8 00:40:50.939497 kubelet[2259]: E0508 00:40:50.939348 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:50.939759 kubelet[2259]: E0508 00:40:50.939733 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-mhdts_calico-system(8af932a1-2652-43b6-80a3-ba0182b9cf24)\"" pod="calico-system/calico-node-mhdts" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" May 8 00:40:50.940192 env[1314]: time="2025-05-08T00:40:50.940159243Z" level=info msg="RemoveContainer for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\"" May 8 00:40:51.122128 env[1314]: time="2025-05-08T00:40:51.122078245Z" level=info msg="RemoveContainer for \"c49770bb7d3321ad2ee3d57de77aee973b4f450a022e619acd2bba8303003ff0\" returns successfully" May 8 00:40:51.411320 kubelet[2259]: E0508 00:40:51.411283 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:51.942473 kubelet[2259]: I0508 00:40:51.942417 2259 scope.go:117] "RemoveContainer" 
containerID="29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52" May 8 00:40:51.942925 kubelet[2259]: E0508 00:40:51.942591 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:51.943150 kubelet[2259]: E0508 00:40:51.943126 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-mhdts_calico-system(8af932a1-2652-43b6-80a3-ba0182b9cf24)\"" pod="calico-system/calico-node-mhdts" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" May 8 00:40:53.410553 kubelet[2259]: E0508 00:40:53.410499 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:53.411370 env[1314]: time="2025-05-08T00:40:53.411310766Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:40:53.435788 env[1314]: time="2025-05-08T00:40:53.435690061Z" level=error msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" failed" error="failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:53.436037 kubelet[2259]: E0508 00:40:53.435974 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:40:53.436093 kubelet[2259]: E0508 00:40:53.436035 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a"} May 8 00:40:53.436093 kubelet[2259]: E0508 00:40:53.436070 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:53.436200 kubelet[2259]: E0508 00:40:53.436096 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35415a0b-9f3d-4f12-b555-b4c08d155deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xrfkq" podUID="35415a0b-9f3d-4f12-b555-b4c08d155deb" May 8 00:40:55.322109 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:35778.service. May 8 00:40:55.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.16:22-10.0.0.1:35778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:55.323601 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:40:55.323696 kernel: audit: type=1130 audit(1746664855.321:376): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.16:22-10.0.0.1:35778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:40:55.353000 audit[4064]: USER_ACCT pid=4064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.354260 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 35778 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:40:55.356545 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:40:55.355000 audit[4064]: CRED_ACQ pid=4064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.361274 systemd-logind[1294]: New session 20 of user core. May 8 00:40:55.362328 systemd[1]: Started session-20.scope. 
May 8 00:40:55.364173 kernel: audit: type=1101 audit(1746664855.353:377): pid=4064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.364301 kernel: audit: type=1103 audit(1746664855.355:378): pid=4064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.367478 kernel: audit: type=1006 audit(1746664855.355:379): pid=4064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 May 8 00:40:55.355000 audit[4064]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff03850200 a2=3 a3=0 items=0 ppid=1 pid=4064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:55.372569 kernel: audit: type=1300 audit(1746664855.355:379): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff03850200 a2=3 a3=0 items=0 ppid=1 pid=4064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:40:55.372674 kernel: audit: type=1327 audit(1746664855.355:379): proctitle=737368643A20636F7265205B707269765D May 8 00:40:55.355000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:40:55.368000 audit[4064]: USER_START pid=4064 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 
00:40:55.378784 kernel: audit: type=1105 audit(1746664855.368:380): pid=4064 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.378877 kernel: audit: type=1103 audit(1746664855.370:381): pid=4067 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.370000 audit[4067]: CRED_ACQ pid=4067 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.411188 env[1314]: time="2025-05-08T00:40:55.411123814Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:40:55.445689 env[1314]: time="2025-05-08T00:40:55.445603412Z" level=error msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" failed" error="failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:55.445982 kubelet[2259]: E0508 00:40:55.445914 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:40:55.446323 kubelet[2259]: E0508 00:40:55.445983 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688"} May 8 00:40:55.446323 kubelet[2259]: E0508 00:40:55.446024 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:55.446323 kubelet[2259]: E0508 00:40:55.446046 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1c58f86-7966-473c-98f3-e00538745ae1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrhhb" podUID="a1c58f86-7966-473c-98f3-e00538745ae1" May 8 00:40:55.488002 sshd[4064]: pam_unix(sshd:session): session closed for user core May 8 00:40:55.488000 audit[4064]: USER_END pid=4064 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.490925 systemd[1]: 
sshd@19-10.0.0.16:22-10.0.0.1:35778.service: Deactivated successfully. May 8 00:40:55.491991 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:40:55.488000 audit[4064]: CRED_DISP pid=4064 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.493213 systemd-logind[1294]: Session 20 logged out. Waiting for processes to exit. May 8 00:40:55.494121 systemd-logind[1294]: Removed session 20. May 8 00:40:55.496910 kernel: audit: type=1106 audit(1746664855.488:382): pid=4064 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.496970 kernel: audit: type=1104 audit(1746664855.488:383): pid=4064 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:40:55.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.16:22-10.0.0.1:35778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:40:56.411058 env[1314]: time="2025-05-08T00:40:56.410986730Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:40:56.411275 env[1314]: time="2025-05-08T00:40:56.411217930Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:40:56.411702 env[1314]: time="2025-05-08T00:40:56.411013651Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:40:56.441425 env[1314]: time="2025-05-08T00:40:56.441353046Z" level=error msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" failed" error="failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:56.441712 kubelet[2259]: E0508 00:40:56.441620 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:40:56.441712 kubelet[2259]: E0508 00:40:56.441690 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d"} May 8 00:40:56.441827 kubelet[2259]: E0508 00:40:56.441727 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 8 00:40:56.441827 kubelet[2259]: E0508 00:40:56.441757 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3dd70705-8c14-4d08-9f87-66c93e2ace47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" podUID="3dd70705-8c14-4d08-9f87-66c93e2ace47"
May 8 00:40:56.442178 env[1314]: time="2025-05-08T00:40:56.442107242Z" level=error msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" failed" error="failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:56.442561 kubelet[2259]: E0508 00:40:56.442521 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97"
May 8 00:40:56.442561 kubelet[2259]: E0508 00:40:56.442555 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97"}
May 8 00:40:56.442690 kubelet[2259]: E0508 00:40:56.442580 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 8 00:40:56.442690 kubelet[2259]: E0508 00:40:56.442597 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9d7454a-993f-4132-8ced-f8cdba985c53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-89lsx" podUID="e9d7454a-993f-4132-8ced-f8cdba985c53"
May 8 00:40:56.450701 env[1314]: time="2025-05-08T00:40:56.450641749Z" level=error msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" failed" error="failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:56.450858 kubelet[2259]: E0508 00:40:56.450813 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285"
May 8 00:40:56.451128 kubelet[2259]: E0508 00:40:56.450868 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285"}
May 8 00:40:56.451128 kubelet[2259]: E0508 00:40:56.450896 2259 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 8 00:40:56.451128 kubelet[2259]: E0508 00:40:56.450915 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5444d20d-8a4f-4e35-a777-fef99f439552\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" podUID="5444d20d-8a4f-4e35-a777-fef99f439552"
May 8 00:40:57.775780 env[1314]: time="2025-05-08T00:40:57.775716003Z" level=info msg="StopPodSandbox for \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\""
May 8 00:40:57.776366 env[1314]: time="2025-05-08T00:40:57.775798290Z" level=info msg="Container to stop \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:40:57.776366 env[1314]: time="2025-05-08T00:40:57.775813159Z" level=info msg="Container to stop \"617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:40:57.776366 env[1314]: time="2025-05-08T00:40:57.775823608Z" level=info msg="Container to stop \"89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:40:57.778994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb-shm.mount: Deactivated successfully.
May 8 00:40:57.822413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb-rootfs.mount: Deactivated successfully.
May 8 00:40:57.840066 env[1314]: time="2025-05-08T00:40:57.839987201Z" level=info msg="shim disconnected" id=b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb
May 8 00:40:57.840066 env[1314]: time="2025-05-08T00:40:57.840055221Z" level=warning msg="cleaning up after shim disconnected" id=b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb namespace=k8s.io
May 8 00:40:57.840066 env[1314]: time="2025-05-08T00:40:57.840068466Z" level=info msg="cleaning up dead shim"
May 8 00:40:57.857337 env[1314]: time="2025-05-08T00:40:57.857279835Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4196 runtime=io.containerd.runc.v2\n"
May 8 00:40:57.857901 env[1314]: time="2025-05-08T00:40:57.857870188Z" level=info msg="TearDown network for sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" successfully"
May 8 00:40:57.858009 env[1314]: time="2025-05-08T00:40:57.857989294Z" level=info msg="StopPodSandbox for \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" returns successfully"
May 8 00:40:57.874905 kubelet[2259]: I0508 00:40:57.874303 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8af932a1-2652-43b6-80a3-ba0182b9cf24-tigera-ca-bundle\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.874905 kubelet[2259]: I0508 00:40:57.874703 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-run-calico\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.874905 kubelet[2259]: I0508 00:40:57.874726 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-lib-calico\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.874905 kubelet[2259]: I0508 00:40:57.874742 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-bin-dir\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.874905 kubelet[2259]: I0508 00:40:57.874761 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-net-dir\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.874905 kubelet[2259]: I0508 00:40:57.874792 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8af932a1-2652-43b6-80a3-ba0182b9cf24-node-certs\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.875619 kubelet[2259]: I0508 00:40:57.874810 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-lib-modules\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.875619 kubelet[2259]: I0508 00:40:57.874829 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-flexvol-driver-host\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.876480 kubelet[2259]: I0508 00:40:57.875820 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.876480 kubelet[2259]: I0508 00:40:57.875899 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.876480 kubelet[2259]: I0508 00:40:57.875922 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.876480 kubelet[2259]: I0508 00:40:57.876239 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.876480 kubelet[2259]: I0508 00:40:57.876269 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.876724 kubelet[2259]: I0508 00:40:57.876708 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.876864 kubelet[2259]: I0508 00:40:57.876831 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-policysync\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.876970 kubelet[2259]: I0508 00:40:57.876953 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-xtables-lock\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.877076 kubelet[2259]: I0508 00:40:57.877060 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-log-dir\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.877182 kubelet[2259]: I0508 00:40:57.877165 2259 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fxst\" (UniqueName: \"kubernetes.io/projected/8af932a1-2652-43b6-80a3-ba0182b9cf24-kube-api-access-7fxst\") pod \"8af932a1-2652-43b6-80a3-ba0182b9cf24\" (UID: \"8af932a1-2652-43b6-80a3-ba0182b9cf24\") "
May 8 00:40:57.877311 kubelet[2259]: I0508 00:40:57.877293 2259 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-run-calico\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.877396 kubelet[2259]: I0508 00:40:57.877381 2259 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-var-lib-calico\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.877487 kubelet[2259]: I0508 00:40:57.877471 2259 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-bin-dir\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.877585 kubelet[2259]: I0508 00:40:57.877571 2259 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-net-dir\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.877689 kubelet[2259]: I0508 00:40:57.877672 2259 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.877775 kubelet[2259]: I0508 00:40:57.877759 2259 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-flexvol-driver-host\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.877881 kubelet[2259]: I0508 00:40:57.876933 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-policysync" (OuterVolumeSpecName: "policysync") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.877984 kubelet[2259]: I0508 00:40:57.876996 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.878065 kubelet[2259]: I0508 00:40:57.877132 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:40:57.882173 systemd[1]: var-lib-kubelet-pods-8af932a1\x2d2652\x2d43b6\x2d80a3\x2dba0182b9cf24-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
May 8 00:40:57.892622 kubelet[2259]: I0508 00:40:57.884006 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af932a1-2652-43b6-80a3-ba0182b9cf24-node-certs" (OuterVolumeSpecName: "node-certs") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 8 00:40:57.892622 kubelet[2259]: I0508 00:40:57.892406 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af932a1-2652-43b6-80a3-ba0182b9cf24-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 8 00:40:57.886985 systemd[1]: var-lib-kubelet-pods-8af932a1\x2d2652\x2d43b6\x2d80a3\x2dba0182b9cf24-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
May 8 00:40:57.896717 systemd[1]: var-lib-kubelet-pods-8af932a1\x2d2652\x2d43b6\x2d80a3\x2dba0182b9cf24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7fxst.mount: Deactivated successfully.
May 8 00:40:57.899829 kubelet[2259]: I0508 00:40:57.899775 2259 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af932a1-2652-43b6-80a3-ba0182b9cf24-kube-api-access-7fxst" (OuterVolumeSpecName: "kube-api-access-7fxst") pod "8af932a1-2652-43b6-80a3-ba0182b9cf24" (UID: "8af932a1-2652-43b6-80a3-ba0182b9cf24"). InnerVolumeSpecName "kube-api-access-7fxst". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 8 00:40:57.925996 kubelet[2259]: I0508 00:40:57.925937 2259 topology_manager.go:215] "Topology Admit Handler" podUID="045c5203-2b23-4a9c-9e1a-cbe5fb002068" podNamespace="calico-system" podName="calico-node-7pk8d"
May 8 00:40:57.926323 kubelet[2259]: E0508 00:40:57.926299 2259 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="flexvol-driver"
May 8 00:40:57.926323 kubelet[2259]: E0508 00:40:57.926321 2259 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="install-cni"
May 8 00:40:57.926405 kubelet[2259]: E0508 00:40:57.926331 2259 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="calico-node"
May 8 00:40:57.926405 kubelet[2259]: E0508 00:40:57.926341 2259 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="calico-node"
May 8 00:40:57.926697 kubelet[2259]: I0508 00:40:57.926676 2259 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="calico-node"
May 8 00:40:57.926697 kubelet[2259]: I0508 00:40:57.926690 2259 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="calico-node"
May 8 00:40:57.926788 kubelet[2259]: E0508 00:40:57.926776 2259 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="calico-node"
May 8 00:40:57.927119 kubelet[2259]: I0508 00:40:57.927099 2259 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" containerName="calico-node"
May 8 00:40:57.955327 kubelet[2259]: I0508 00:40:57.955281 2259 scope.go:117] "RemoveContainer" containerID="29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52"
May 8 00:40:57.961170 env[1314]: time="2025-05-08T00:40:57.961123921Z" level=info msg="RemoveContainer for \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\""
May 8 00:40:57.969374 env[1314]: time="2025-05-08T00:40:57.969236492Z" level=info msg="RemoveContainer for \"29c199d3afc1c4ef4f740cc27356a3371144c6811d240bcbc4024c463b5cbc52\" returns successfully"
May 8 00:40:57.969574 kubelet[2259]: I0508 00:40:57.969525 2259 scope.go:117] "RemoveContainer" containerID="89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321"
May 8 00:40:57.972486 env[1314]: time="2025-05-08T00:40:57.972190614Z" level=info msg="RemoveContainer for \"89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321\""
May 8 00:40:57.975445 env[1314]: time="2025-05-08T00:40:57.975404111Z" level=info msg="RemoveContainer for \"89534fd92ac31e82907bb5ecfb72d84d7bbb26a7ed2a0bb1e61e552122a69321\" returns successfully"
May 8 00:40:57.975654 kubelet[2259]: I0508 00:40:57.975621 2259 scope.go:117] "RemoveContainer" containerID="617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd"
May 8 00:40:57.976705 env[1314]: time="2025-05-08T00:40:57.976527960Z" level=info msg="RemoveContainer for \"617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd\""
May 8 00:40:57.978128 kubelet[2259]: I0508 00:40:57.978102 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-var-lib-calico\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978201 kubelet[2259]: I0508 00:40:57.978141 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-xtables-lock\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978240 kubelet[2259]: I0508 00:40:57.978210 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-policysync\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978303 kubelet[2259]: I0508 00:40:57.978287 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-var-run-calico\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978364 kubelet[2259]: I0508 00:40:57.978341 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-flexvol-driver-host\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978364 kubelet[2259]: I0508 00:40:57.978361 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-lib-modules\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978419 kubelet[2259]: I0508 00:40:57.978374 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-cni-bin-dir\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978448 kubelet[2259]: I0508 00:40:57.978422 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-cni-net-dir\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978448 kubelet[2259]: I0508 00:40:57.978437 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/045c5203-2b23-4a9c-9e1a-cbe5fb002068-cni-log-dir\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978497 kubelet[2259]: I0508 00:40:57.978451 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/045c5203-2b23-4a9c-9e1a-cbe5fb002068-node-certs\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.978523 kubelet[2259]: I0508 00:40:57.978503 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfq9\" (UniqueName: \"kubernetes.io/projected/045c5203-2b23-4a9c-9e1a-cbe5fb002068-kube-api-access-brfq9\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.979369 env[1314]: time="2025-05-08T00:40:57.979349330Z" level=info msg="RemoveContainer for \"617f3739a38e4624f40ba2d60fb1e4038a586d1d1f4555f3db6f387b2a1a43dd\" returns successfully"
May 8 00:40:57.979475 kubelet[2259]: I0508 00:40:57.979456 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045c5203-2b23-4a9c-9e1a-cbe5fb002068-tigera-ca-bundle\") pod \"calico-node-7pk8d\" (UID: \"045c5203-2b23-4a9c-9e1a-cbe5fb002068\") " pod="calico-system/calico-node-7pk8d"
May 8 00:40:57.979540 kubelet[2259]: I0508 00:40:57.979519 2259 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8af932a1-2652-43b6-80a3-ba0182b9cf24-node-certs\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.979540 kubelet[2259]: I0508 00:40:57.979530 2259 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.979540 kubelet[2259]: I0508 00:40:57.979538 2259 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-policysync\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.979617 kubelet[2259]: I0508 00:40:57.979545 2259 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8af932a1-2652-43b6-80a3-ba0182b9cf24-cni-log-dir\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.979617 kubelet[2259]: I0508 00:40:57.979552 2259 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7fxst\" (UniqueName: \"kubernetes.io/projected/8af932a1-2652-43b6-80a3-ba0182b9cf24-kube-api-access-7fxst\") on node \"localhost\" DevicePath \"\""
May 8 00:40:57.979617 kubelet[2259]: I0508 00:40:57.979560 2259 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8af932a1-2652-43b6-80a3-ba0182b9cf24-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 8 00:40:58.232296 kubelet[2259]: E0508 00:40:58.232233 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:58.233269 env[1314]: time="2025-05-08T00:40:58.232929296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7pk8d,Uid:045c5203-2b23-4a9c-9e1a-cbe5fb002068,Namespace:calico-system,Attempt:0,}"
May 8 00:40:58.581502 env[1314]: time="2025-05-08T00:40:58.581362070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:58.581502 env[1314]: time="2025-05-08T00:40:58.581396866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:58.581502 env[1314]: time="2025-05-08T00:40:58.581407346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:58.581942 env[1314]: time="2025-05-08T00:40:58.581884023Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556 pid=4220 runtime=io.containerd.runc.v2
May 8 00:40:58.612310 env[1314]: time="2025-05-08T00:40:58.612217938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7pk8d,Uid:045c5203-2b23-4a9c-9e1a-cbe5fb002068,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\""
May 8 00:40:58.613077 kubelet[2259]: E0508 00:40:58.613045 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:58.615709 env[1314]: time="2025-05-08T00:40:58.615661070Z" level=info msg="CreateContainer within sandbox \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 8 00:40:59.071267 env[1314]: time="2025-05-08T00:40:59.071199131Z" level=info msg="CreateContainer within sandbox \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7\""
May 8 00:40:59.072137 env[1314]: time="2025-05-08T00:40:59.072095868Z" level=info msg="StartContainer for \"f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7\""
May 8 00:40:59.099921 systemd[1]: run-containerd-runc-k8s.io-f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7-runc.zYknqP.mount: Deactivated successfully.
May 8 00:40:59.147111 env[1314]: time="2025-05-08T00:40:59.147044991Z" level=info msg="StartContainer for \"f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7\" returns successfully"
May 8 00:40:59.268270 env[1314]: time="2025-05-08T00:40:59.268177067Z" level=info msg="shim disconnected" id=f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7
May 8 00:40:59.268270 env[1314]: time="2025-05-08T00:40:59.268261437Z" level=warning msg="cleaning up after shim disconnected" id=f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7 namespace=k8s.io
May 8 00:40:59.268270 env[1314]: time="2025-05-08T00:40:59.268278089Z" level=info msg="cleaning up dead shim"
May 8 00:40:59.275872 env[1314]: time="2025-05-08T00:40:59.275806513Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4299 runtime=io.containerd.runc.v2\n"
May 8 00:40:59.412554 kubelet[2259]: I0508 00:40:59.412445 2259 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af932a1-2652-43b6-80a3-ba0182b9cf24" path="/var/lib/kubelet/pods/8af932a1-2652-43b6-80a3-ba0182b9cf24/volumes"
May 8 00:40:59.778684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f336940f96b887f431da677e8d3c2b72dabc23987d03bf21525dab6baaa394a7-rootfs.mount: Deactivated successfully.
May 8 00:40:59.961860 kubelet[2259]: E0508 00:40:59.961791 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:59.963547 env[1314]: time="2025-05-08T00:40:59.963501937Z" level=info msg="CreateContainer within sandbox \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 8 00:41:00.200921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631309957.mount: Deactivated successfully.
May 8 00:41:00.213480 env[1314]: time="2025-05-08T00:41:00.213419435Z" level=info msg="CreateContainer within sandbox \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"062494e391d69e72f8a03533aec6c29522e1df8566be6a508c62b35a4ab7533c\""
May 8 00:41:00.214353 env[1314]: time="2025-05-08T00:41:00.214283590Z" level=info msg="StartContainer for \"062494e391d69e72f8a03533aec6c29522e1df8566be6a508c62b35a4ab7533c\""
May 8 00:41:00.388605 env[1314]: time="2025-05-08T00:41:00.388531434Z" level=info msg="StartContainer for \"062494e391d69e72f8a03533aec6c29522e1df8566be6a508c62b35a4ab7533c\" returns successfully"
May 8 00:41:00.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.16:22-10.0.0.1:35782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:41:00.491231 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:35782.service.
May 8 00:41:00.492440 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 8 00:41:00.492512 kernel: audit: type=1130 audit(1746664860.490:385): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.16:22-10.0.0.1:35782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:41:00.525000 audit[4351]: USER_ACCT pid=4351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.526557 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 35782 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:41:00.529000 audit[4351]: CRED_ACQ pid=4351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.531126 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:41:00.534634 kernel: audit: type=1101 audit(1746664860.525:386): pid=4351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.534716 kernel: audit: type=1103 audit(1746664860.529:387): pid=4351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.534765 kernel: audit: type=1006 audit(1746664860.529:388): pid=4351 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
May 8 00:41:00.537131 kernel: audit: type=1300 audit(1746664860.529:388): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa8555210 a2=3 a3=0 items=0 ppid=1 pid=4351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:41:00.529000 audit[4351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa8555210 a2=3 a3=0 items=0 ppid=1 pid=4351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 8 00:41:00.538202 systemd-logind[1294]: New session 21 of user core.
May 8 00:41:00.538970 systemd[1]: Started session-21.scope.
May 8 00:41:00.541113 kernel: audit: type=1327 audit(1746664860.529:388): proctitle=737368643A20636F7265205B707269765D
May 8 00:41:00.529000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 8 00:41:00.542000 audit[4351]: USER_START pid=4351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.543000 audit[4354]: CRED_ACQ pid=4354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.552619 kernel: audit: type=1105 audit(1746664860.542:389): pid=4351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.552698 kernel: audit: type=1103 audit(1746664860.543:390): pid=4354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.714443 sshd[4351]: pam_unix(sshd:session): session closed for user core
May 8 00:41:00.715000 audit[4351]: USER_END pid=4351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 8 00:41:00.717266 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:35782.service: Deactivated successfully.
May 8 00:41:00.718500 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:41:00.719099 systemd-logind[1294]: Session 21 logged out. Waiting for processes to exit.
May 8 00:41:00.720181 systemd-logind[1294]: Removed session 21.
May 8 00:41:00.715000 audit[4351]: CRED_DISP pid=4351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:00.724974 kernel: audit: type=1106 audit(1746664860.715:391): pid=4351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:00.725048 kernel: audit: type=1104 audit(1746664860.715:392): pid=4351 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:00.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.16:22-10.0.0.1:35782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:00.965873 kubelet[2259]: E0508 00:41:00.965749 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:01.410926 env[1314]: time="2025-05-08T00:41:01.410769168Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:41:01.411312 kubelet[2259]: E0508 00:41:01.410993 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:01.438470 env[1314]: time="2025-05-08T00:41:01.438402148Z" level=error msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" failed" error="failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:41:01.438731 kubelet[2259]: E0508 00:41:01.438679 2259 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:01.438785 kubelet[2259]: E0508 00:41:01.438751 2259 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195"} May 8 00:41:01.438813 kubelet[2259]: E0508 00:41:01.438791 2259 kuberuntime_manager.go:1075] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:41:01.438895 kubelet[2259]: E0508 00:41:01.438815 2259 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7ae0688-a473-448c-b8b9-7f2261bb0d9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podUID="d7ae0688-a473-448c-b8b9-7f2261bb0d9a" May 8 00:41:01.967691 kubelet[2259]: E0508 00:41:01.967633 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:02.174768 env[1314]: time="2025-05-08T00:41:02.174677400Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 8 00:41:02.195143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-062494e391d69e72f8a03533aec6c29522e1df8566be6a508c62b35a4ab7533c-rootfs.mount: Deactivated successfully. 
May 8 00:41:02.203060 env[1314]: time="2025-05-08T00:41:02.202984695Z" level=info msg="shim disconnected" id=062494e391d69e72f8a03533aec6c29522e1df8566be6a508c62b35a4ab7533c May 8 00:41:02.203060 env[1314]: time="2025-05-08T00:41:02.203047274Z" level=warning msg="cleaning up after shim disconnected" id=062494e391d69e72f8a03533aec6c29522e1df8566be6a508c62b35a4ab7533c namespace=k8s.io May 8 00:41:02.203060 env[1314]: time="2025-05-08T00:41:02.203062513Z" level=info msg="cleaning up dead shim" May 8 00:41:02.209973 env[1314]: time="2025-05-08T00:41:02.209915636Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:41:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4407 runtime=io.containerd.runc.v2\n" May 8 00:41:02.971338 kubelet[2259]: E0508 00:41:02.971297 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:02.982141 env[1314]: time="2025-05-08T00:41:02.982089101Z" level=info msg="CreateContainer within sandbox \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:41:03.554099 env[1314]: time="2025-05-08T00:41:03.554005724Z" level=info msg="CreateContainer within sandbox \"e4320ca52414eb0640029168d6a177b0b57668e4fe865139d6c33d9ef809b556\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eba0b34fc0549c5629766d6b3df0e3b437e7197b81df93ef78a7ec66405315cf\"" May 8 00:41:03.554862 env[1314]: time="2025-05-08T00:41:03.554674917Z" level=info msg="StartContainer for \"eba0b34fc0549c5629766d6b3df0e3b437e7197b81df93ef78a7ec66405315cf\"" May 8 00:41:03.751272 env[1314]: time="2025-05-08T00:41:03.751169348Z" level=info msg="StartContainer for \"eba0b34fc0549c5629766d6b3df0e3b437e7197b81df93ef78a7ec66405315cf\" returns successfully" May 8 00:41:03.976316 kubelet[2259]: E0508 00:41:03.976220 2259 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:04.124435 kubelet[2259]: I0508 00:41:04.124373 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7pk8d" podStartSLOduration=7.124349901 podStartE2EDuration="7.124349901s" podCreationTimestamp="2025-05-08 00:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:41:04.124157225 +0000 UTC m=+88.810567967" watchObservedRunningTime="2025-05-08 00:41:04.124349901 +0000 UTC m=+88.810760643" May 8 00:41:04.978147 kubelet[2259]: E0508 00:41:04.978108 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:05.411906 env[1314]: time="2025-05-08T00:41:05.411737528Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:41:05.564000 audit[4595]: AVC avc: denied { write } for pid=4595 comm="tee" name="fd" dev="proc" ino=28015 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.567478 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:41:05.567549 kernel: audit: type=1400 audit(1746664865.564:394): avc: denied { write } for pid=4595 comm="tee" name="fd" dev="proc" ino=28015 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.564000 audit[4571]: AVC avc: denied { write } for pid=4571 comm="tee" name="fd" dev="proc" ino=27377 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.575925 kernel: audit: type=1400 audit(1746664865.564:395): avc: denied { write } for pid=4571 
comm="tee" name="fd" dev="proc" ino=27377 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.577114 kernel: audit: type=1300 audit(1746664865.564:395): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdd020a2a a2=241 a3=1b6 items=1 ppid=4554 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.564000 audit[4571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdd020a2a a2=241 a3=1b6 items=1 ppid=4554 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.564000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 8 00:41:05.586896 kernel: audit: type=1307 audit(1746664865.564:395): cwd="/etc/service/enabled/bird6/log" May 8 00:41:05.564000 audit: PATH item=0 name="/dev/fd/63" inode=27360 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.595894 kernel: audit: type=1302 audit(1746664865.564:395): item=0 name="/dev/fd/63" inode=27360 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.564000 audit[4595]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7126ea2b a2=241 a3=1b6 items=1 ppid=4561 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.681770 kernel: audit: type=1327 audit(1746664865.564:395): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.681991 kernel: audit: type=1300 audit(1746664865.564:394): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7126ea2b a2=241 a3=1b6 items=1 ppid=4561 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.564000 audit: CWD cwd="/etc/service/enabled/bird/log" May 8 00:41:05.564000 audit: PATH item=0 name="/dev/fd/63" inode=26534 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.777964 kernel: audit: type=1307 audit(1746664865.564:394): cwd="/etc/service/enabled/bird/log" May 8 00:41:05.778121 kernel: audit: type=1302 audit(1746664865.564:394): item=0 name="/dev/fd/63" inode=26534 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.778165 kernel: audit: type=1327 audit(1746664865.564:394): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.569000 audit[4588]: AVC avc: denied { write } for pid=4588 comm="tee" name="fd" dev="proc" ino=26540 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.569000 audit[4588]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4cfa8a2a a2=241 a3=1b6 items=1 ppid=4567 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.569000 audit: CWD cwd="/etc/service/enabled/confd/log" May 8 00:41:05.569000 audit: PATH item=0 name="/dev/fd/63" inode=28673 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.574000 audit[4610]: AVC avc: denied { write } for pid=4610 comm="tee" name="fd" dev="proc" ino=26544 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.574000 audit[4610]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe422a4a1b a2=241 a3=1b6 items=1 ppid=4565 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.574000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 8 00:41:05.574000 audit: PATH item=0 name="/dev/fd/63" inode=28682 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.609000 audit[4632]: AVC avc: denied { write } for pid=4632 comm="tee" name="fd" dev="proc" ino=28035 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.609000 audit[4632]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff4c174a2a a2=241 a3=1b6 items=1 ppid=4562 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.609000 audit: CWD cwd="/etc/service/enabled/felix/log" May 8 00:41:05.609000 audit: PATH item=0 name="/dev/fd/63" inode=28032 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.681000 audit[4641]: AVC avc: denied { write } for pid=4641 comm="tee" name="fd" dev="proc" ino=26550 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.681000 audit[4641]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd6686ba1a a2=241 a3=1b6 items=1 ppid=4557 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.681000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 8 00:41:05.681000 audit: PATH item=0 name="/dev/fd/63" inode=27386 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.690000 audit[4643]: AVC avc: denied { write } for 
pid=4643 comm="tee" name="fd" dev="proc" ino=28709 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 8 00:41:05.690000 audit[4643]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd43eaa2c a2=241 a3=1b6 items=1 ppid=4572 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.690000 audit: CWD cwd="/etc/service/enabled/cni/log" May 8 00:41:05.690000 audit: PATH item=0 name="/dev/fd/63" inode=28695 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:41:05.690000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 8 00:41:05.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.16:22-10.0.0.1:48140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:05.781256 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:48140.service. 
May 8 00:41:05.815000 audit[4650]: USER_ACCT pid=4650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:05.817917 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 48140 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:05.817000 audit[4650]: CRED_ACQ pid=4650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:05.817000 audit[4650]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe963c9f70 a2=3 a3=0 items=0 ppid=1 pid=4650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:05.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:05.819385 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:05.823050 systemd-logind[1294]: New session 22 of user core. May 8 00:41:05.823874 systemd[1]: Started session-22.scope. 
May 8 00:41:05.826000 audit[4650]: USER_START pid=4650 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:05.827000 audit[4653]: CRED_ACQ pid=4653 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:06.138380 sshd[4650]: pam_unix(sshd:session): session closed for user core May 8 00:41:06.137000 audit[4650]: USER_END pid=4650 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:06.138000 audit[4650]: CRED_DISP pid=4650 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:06.141237 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:48140.service: Deactivated successfully. May 8 00:41:06.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.16:22-10.0.0.1:48140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:06.142267 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:41:06.142277 systemd-logind[1294]: Session 22 logged out. Waiting for processes to exit. May 8 00:41:06.143048 systemd-logind[1294]: Removed session 22. 
May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.576 [INFO][4539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.576 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" iface="eth0" netns="/var/run/netns/cni-00c1f312-b4fa-f1e8-1656-e4aba5bb077e" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.577 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" iface="eth0" netns="/var/run/netns/cni-00c1f312-b4fa-f1e8-1656-e4aba5bb077e" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.577 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" iface="eth0" netns="/var/run/netns/cni-00c1f312-b4fa-f1e8-1656-e4aba5bb077e" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.577 [INFO][4539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.577 [INFO][4539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.650 [INFO][4618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.650 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:05.650 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:06.132 [WARNING][4618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:06.132 [INFO][4618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:06.219 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:06.231399 env[1314]: 2025-05-08 00:41:06.226 [INFO][4539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:06.233799 systemd[1]: run-netns-cni\x2d00c1f312\x2db4fa\x2df1e8\x2d1656\x2de4aba5bb077e.mount: Deactivated successfully. 
May 8 00:41:06.235093 env[1314]: time="2025-05-08T00:41:06.235035050Z" level=info msg="TearDown network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" successfully" May 8 00:41:06.235093 env[1314]: time="2025-05-08T00:41:06.235074335Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" returns successfully" May 8 00:41:06.235486 kubelet[2259]: E0508 00:41:06.235452 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:06.235896 env[1314]: time="2025-05-08T00:41:06.235854477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrfkq,Uid:35415a0b-9f3d-4f12-b555-b4c08d155deb,Namespace:kube-system,Attempt:1,}" May 8 00:41:06.423000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { perfmon } for 
pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.423000 audit: BPF prog-id=10 op=LOAD May 8 00:41:06.423000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1dd6cb50 a2=98 a3=3 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.423000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.424000 audit: BPF prog-id=10 op=UNLOAD May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: 
denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit: BPF prog-id=11 op=LOAD May 8 00:41:06.425000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd1dd6c930 a2=74 a3=540051 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.425000 audit: BPF prog-id=11 op=UNLOAD May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 
00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.425000 audit: BPF prog-id=12 op=LOAD May 8 00:41:06.425000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd1dd6c960 a2=94 a3=2 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.425000 audit: BPF prog-id=12 op=UNLOAD May 8 00:41:06.525918 systemd-networkd[1082]: cali758d0da9acf: Link UP May 8 00:41:06.528329 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:41:06.528389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali758d0da9acf: link becomes ready May 8 00:41:06.530002 systemd-networkd[1082]: cali758d0da9acf: Gained carrier May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.430 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0 coredns-7db6d8ff4d- kube-system 35415a0b-9f3d-4f12-b555-b4c08d155deb 1098 0 2025-05-08 00:39:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-xrfkq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali758d0da9acf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.430 [INFO][4689] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.467 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" 
HandleID="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.477 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" HandleID="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050b20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-xrfkq", "timestamp":"2025-05-08 00:41:06.467371277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.478 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.478 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.478 [INFO][4710] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.480 [INFO][4710] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.485 [INFO][4710] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.490 [INFO][4710] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.492 [INFO][4710] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.494 [INFO][4710] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.494 [INFO][4710] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.496 [INFO][4710] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65 May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.505 [INFO][4710] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.512 [INFO][4710] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" host="localhost" May 8 00:41:06.543983 
env[1314]: 2025-05-08 00:41:06.512 [INFO][4710] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" host="localhost" May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.512 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:06.543983 env[1314]: 2025-05-08 00:41:06.512 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" HandleID="k8s-pod-network.bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.544747 env[1314]: 2025-05-08 00:41:06.515 [INFO][4689] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"35415a0b-9f3d-4f12-b555-b4c08d155deb", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-7db6d8ff4d-xrfkq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali758d0da9acf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:06.544747 env[1314]: 2025-05-08 00:41:06.515 [INFO][4689] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.544747 env[1314]: 2025-05-08 00:41:06.515 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali758d0da9acf ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.544747 env[1314]: 2025-05-08 00:41:06.530 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.544747 env[1314]: 2025-05-08 00:41:06.530 [INFO][4689] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"35415a0b-9f3d-4f12-b555-b4c08d155deb", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65", Pod:"coredns-7db6d8ff4d-xrfkq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali758d0da9acf", MAC:"52:9b:50:5a:48:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:06.544747 env[1314]: 2025-05-08 00:41:06.541 [INFO][4689] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xrfkq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:06.553000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.553000 audit: BPF prog-id=13 op=LOAD May 8 00:41:06.553000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd1dd6c820 a2=40 a3=1 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.553000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.554000 audit: BPF prog-id=13 op=UNLOAD May 8 00:41:06.554000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.554000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd1dd6c8f0 a2=50 a3=7ffd1dd6c9d0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.554000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.558291 env[1314]: time="2025-05-08T00:41:06.558217686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:41:06.558383 env[1314]: time="2025-05-08T00:41:06.558304661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:41:06.558383 env[1314]: time="2025-05-08T00:41:06.558327954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:41:06.558610 env[1314]: time="2025-05-08T00:41:06.558572951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65 pid=4741 runtime=io.containerd.runc.v2 May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd1dd6c830 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd1dd6c860 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd1dd6c770 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd1dd6c880 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd1dd6c860 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd1dd6c850 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd1dd6c880 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd1dd6c860 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd1dd6c880 a2=28 a3=0 items=0 
ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd1dd6c850 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.563000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.563000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd1dd6c8c0 a2=28 a3=0 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.563000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd1dd6c670 a2=50 a3=1 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit: BPF prog-id=14 op=LOAD May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd1dd6c670 a2=94 a3=5 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit: BPF prog-id=14 op=UNLOAD May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd1dd6c720 a2=50 a3=1 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd1dd6c840 a2=4 a3=38 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit[4708]: AVC avc: denied { 
bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { confidentiality } for pid=4708 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd1dd6c890 a2=94 a3=6 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { confidentiality } for pid=4708 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd1dd6c040 a2=94 a3=83 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { perfmon } for pid=4708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { bpf } for pid=4708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.564000 audit[4708]: AVC avc: denied { confidentiality } for pid=4708 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 
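The `PROCTITLE` fields in the audit records above are hex-encoded command lines with NUL bytes separating the argv elements, and the accompanying `SYSCALL` records show `syscall=321` (the `bpf(2)` syscall on x86_64, `arch=c000003e`) being denied `capability=39` (CAP_BPF) and `capability=38` (CAP_PERFMON). As a minimal sketch (not part of the log), a small Python helper can recover the command each audited process ran:

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE hex string into a space-joined command line.

    PROCTITLE encodes the raw /proc/<pid>/cmdline contents as hex, so argv
    elements are separated by NUL bytes.
    """
    raw = bytes.fromhex(hexstr)
    # Split on NUL separators and drop empty trailing elements.
    return " ".join(
        part.decode("utf-8", errors="replace")
        for part in raw.split(b"\x00")
        if part
    )

# PROCTITLE value taken verbatim from the audit records above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# → bpftool map list --json
```

Applied to the longer `PROCTITLE` values later in the log, the same helper shows Calico driving `bpftool map create /sys/fs/bpf/calico/...` and `bpftool prog load /usr/lib/calico/bpf/filter.o ... type xdp`, which matches the `BPF prog-id=... op=LOAD/UNLOAD` events interleaved with the denials.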
May 8 00:41:06.564000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd1dd6c040 a2=94 a3=83 items=0 ppid=4564 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 8 00:41:06.577000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { bpf } for pid=4765 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.577000 audit: BPF prog-id=15 op=LOAD May 8 00:41:06.577000 audit[4765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbe54fdf0 a2=98 a3=1999999999999999 items=0 ppid=4564 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.577000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 8 00:41:06.578000 audit: BPF prog-id=15 op=UNLOAD May 8 00:41:06.578000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC 
avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.578000 audit: BPF prog-id=16 op=LOAD May 8 00:41:06.578000 audit[4765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbe54fcd0 a2=74 a3=ffff items=0 ppid=4564 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.578000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 8 00:41:06.579000 audit: BPF prog-id=16 op=UNLOAD May 8 00:41:06.579000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 
00:41:06.579000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.579000 audit: BPF prog-id=17 op=LOAD May 8 00:41:06.579000 audit[4765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbe54fd10 a2=40 a3=7ffcbe54fef0 items=0 ppid=4564 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.579000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 8 00:41:06.580000 audit: BPF prog-id=17 op=UNLOAD May 8 00:41:06.594989 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:41:06.625133 env[1314]: time="2025-05-08T00:41:06.625086537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrfkq,Uid:35415a0b-9f3d-4f12-b555-b4c08d155deb,Namespace:kube-system,Attempt:1,} returns sandbox id \"bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65\"" May 8 00:41:06.626942 kubelet[2259]: E0508 00:41:06.626619 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:06.630619 env[1314]: time="2025-05-08T00:41:06.630583876Z" level=info msg="CreateContainer within sandbox \"bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:41:06.650902 systemd-networkd[1082]: vxlan.calico: Link UP May 8 00:41:06.650908 systemd-networkd[1082]: vxlan.calico: Gained carrier May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 
audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit: BPF prog-id=18 op=LOAD May 8 00:41:06.666000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca567db50 a2=98 a3=ffffffff items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.666000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.666000 audit: BPF prog-id=18 op=UNLOAD May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 
00:41:06.666000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.666000 audit: BPF prog-id=19 op=LOAD May 8 00:41:06.666000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca567d960 a2=74 a3=540051 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.666000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit: BPF prog-id=19 op=UNLOAD May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 
audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit: BPF prog-id=20 op=LOAD May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca567d990 a2=94 a3=2 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit: BPF prog-id=20 op=UNLOAD May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffca567d860 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca567d890 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca567d7a0 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffca567d8b0 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffca567d890 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffca567d880 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffca567d8b0 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca567d890 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca567d8b0 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca567d880 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffca567d8f0 a2=28 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit: BPF prog-id=21 op=LOAD May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffca567d760 a2=40 a3=0 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit: BPF prog-id=21 op=UNLOAD May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffca567d750 a2=50 a3=2800 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 
00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffca567d750 a2=50 a3=2800 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit: BPF prog-id=22 op=LOAD May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffca567cf70 a2=94 a3=2 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.667000 audit: BPF prog-id=22 op=UNLOAD May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { perfmon } for pid=4804 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit[4804]: AVC avc: denied { bpf } for pid=4804 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.667000 audit: BPF prog-id=23 op=LOAD May 8 00:41:06.667000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffca567d070 a2=94 a3=30 items=0 ppid=4564 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 
00:41:06.667000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC 
avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit: BPF prog-id=24 op=LOAD May 8 00:41:06.672000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36413e60 a2=98 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.672000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.672000 audit: BPF prog-id=24 op=UNLOAD May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit: BPF prog-id=25 op=LOAD May 8 00:41:06.672000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe36413c40 a2=74 a3=540051 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.672000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.672000 audit: BPF prog-id=25 op=UNLOAD May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.672000 audit: BPF prog-id=26 op=LOAD May 8 00:41:06.672000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe36413c70 a2=94 a3=2 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.672000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.672000 audit: BPF 
prog-id=26 op=UNLOAD May 8 00:41:06.677949 env[1314]: time="2025-05-08T00:41:06.677852883Z" level=info msg="CreateContainer within sandbox \"bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"484645ccc36bf6221d2da1ba4ff7a62c7a6016875d8d7e605c2949a0f623a7bb\"" May 8 00:41:06.685268 env[1314]: time="2025-05-08T00:41:06.684099196Z" level=info msg="StartContainer for \"484645ccc36bf6221d2da1ba4ff7a62c7a6016875d8d7e605c2949a0f623a7bb\"" May 8 00:41:06.746670 env[1314]: time="2025-05-08T00:41:06.746606007Z" level=info msg="StartContainer for \"484645ccc36bf6221d2da1ba4ff7a62c7a6016875d8d7e605c2949a0f623a7bb\" returns successfully" May 8 00:41:06.794000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit: BPF prog-id=27 op=LOAD May 8 00:41:06.794000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe36413b30 a2=40 a3=1 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.794000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.794000 audit: BPF prog-id=27 op=UNLOAD May 8 00:41:06.794000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.794000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe36413c00 a2=50 a3=7ffe36413ce0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.794000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe36413b40 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe36413b70 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffe36413a80 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe36413b90 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe36413b70 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe36413b60 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe36413b90 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffe36413b70 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe36413b90 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe36413b60 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.802000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.802000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe36413bd0 a2=28 a3=0 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe36413980 a2=50 a3=1 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } 
for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit: BPF prog-id=28 op=LOAD May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe36413980 a2=94 a3=5 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit: BPF prog-id=28 op=UNLOAD May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe36413a30 a2=50 a3=1 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe36413b50 a2=4 a3=38 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { confidentiality } for pid=4807 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe36413ba0 a2=94 a3=6 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { 
perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { confidentiality } for pid=4807 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe36413350 a2=94 a3=83 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { perfmon } for pid=4807 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.803000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 
00:41:06.803000 audit[4807]: AVC avc: denied { confidentiality } for pid=4807 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 8 00:41:06.803000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe36413350 a2=94 a3=83 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.804000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.804000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe36414d90 a2=10 a3=208 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.804000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.804000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe36414c30 a2=10 a3=3 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.804000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.804000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe36414bd0 a2=10 a3=3 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.804000 audit[4807]: AVC avc: denied { bpf } for pid=4807 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 8 00:41:06.804000 audit[4807]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe36414bd0 a2=10 a3=7 items=0 ppid=4564 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 8 00:41:06.810000 audit: BPF prog-id=23 op=UNLOAD May 8 00:41:06.872000 audit[4874]: NETFILTER_CFG 
table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4874 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:06.872000 audit[4874]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd12ce2f20 a2=0 a3=7ffd12ce2f0c items=0 ppid=4564 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.872000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:06.877000 audit[4873]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:06.877000 audit[4873]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcbfb87190 a2=0 a3=7ffcbfb8717c items=0 ppid=4564 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.877000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:06.882000 audit[4872]: NETFILTER_CFG table=raw:99 family=2 entries=21 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:06.882000 audit[4872]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff54f2f320 a2=0 a3=7fff54f2f30c items=0 ppid=4564 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.882000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:06.884000 audit[4877]: NETFILTER_CFG table=filter:100 family=2 entries=69 op=nft_register_chain pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:06.884000 audit[4877]: SYSCALL arch=c000003e syscall=46 success=yes exit=36404 a0=3 a1=7fff2e17dc50 a2=0 a3=7fff2e17dc3c items=0 ppid=4564 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:06.884000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:06.983653 kubelet[2259]: E0508 00:41:06.983343 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:07.007000 audit[4884]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:07.007000 audit[4884]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc5ee35fa0 a2=0 a3=7ffc5ee35f8c items=0 ppid=2447 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:07.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:07.013000 audit[4884]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 
00:41:07.013000 audit[4884]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc5ee35fa0 a2=0 a3=0 items=0 ppid=2447 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:07.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:07.411386 env[1314]: time="2025-05-08T00:41:07.411231247Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:41:07.529865 kubelet[2259]: I0508 00:41:07.529204 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xrfkq" podStartSLOduration=78.529177177 podStartE2EDuration="1m18.529177177s" podCreationTimestamp="2025-05-08 00:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:41:06.995756605 +0000 UTC m=+91.682167337" watchObservedRunningTime="2025-05-08 00:41:07.529177177 +0000 UTC m=+92.215587919" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.529 [INFO][4901] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.530 [INFO][4901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" iface="eth0" netns="/var/run/netns/cni-a1f8effe-4faf-d75e-6577-86e0d7fffd25" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.531 [INFO][4901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" iface="eth0" netns="/var/run/netns/cni-a1f8effe-4faf-d75e-6577-86e0d7fffd25" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.531 [INFO][4901] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" iface="eth0" netns="/var/run/netns/cni-a1f8effe-4faf-d75e-6577-86e0d7fffd25" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.531 [INFO][4901] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.531 [INFO][4901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.551 [INFO][4910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.551 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.551 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.561 [WARNING][4910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.562 [INFO][4910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.563 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:07.567577 env[1314]: 2025-05-08 00:41:07.566 [INFO][4901] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:07.570819 systemd[1]: run-netns-cni\x2da1f8effe\x2d4faf\x2dd75e\x2d6577\x2d86e0d7fffd25.mount: Deactivated successfully. 
May 8 00:41:07.571783 env[1314]: time="2025-05-08T00:41:07.571732945Z" level=info msg="TearDown network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" successfully" May 8 00:41:07.571888 env[1314]: time="2025-05-08T00:41:07.571781557Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" returns successfully" May 8 00:41:07.572264 kubelet[2259]: E0508 00:41:07.572219 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:07.572692 env[1314]: time="2025-05-08T00:41:07.572661790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-89lsx,Uid:e9d7454a-993f-4132-8ced-f8cdba985c53,Namespace:kube-system,Attempt:1,}" May 8 00:41:07.709094 systemd-networkd[1082]: cali92e491f9aaa: Link UP May 8 00:41:07.711396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali92e491f9aaa: link becomes ready May 8 00:41:07.712122 systemd-networkd[1082]: cali92e491f9aaa: Gained carrier May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.623 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0 coredns-7db6d8ff4d- kube-system e9d7454a-993f-4132-8ced-f8cdba985c53 1123 0 2025-05-08 00:39:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-89lsx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92e491f9aaa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-" May 
8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.623 [INFO][4919] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.658 [INFO][4934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" HandleID="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.667 [INFO][4934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" HandleID="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fcac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-89lsx", "timestamp":"2025-05-08 00:41:07.658792604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.667 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.668 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.668 [INFO][4934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.669 [INFO][4934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.675 [INFO][4934] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.679 [INFO][4934] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.681 [INFO][4934] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.685 [INFO][4934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.685 [INFO][4934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.686 [INFO][4934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.693 [INFO][4934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.703 [INFO][4934] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" host="localhost" May 8 00:41:07.733434 
env[1314]: 2025-05-08 00:41:07.704 [INFO][4934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" host="localhost" May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.704 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:07.733434 env[1314]: 2025-05-08 00:41:07.704 [INFO][4934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" HandleID="k8s-pod-network.b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.734354 env[1314]: 2025-05-08 00:41:07.706 [INFO][4919] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9d7454a-993f-4132-8ced-f8cdba985c53", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-7db6d8ff4d-89lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92e491f9aaa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:07.734354 env[1314]: 2025-05-08 00:41:07.706 [INFO][4919] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.734354 env[1314]: 2025-05-08 00:41:07.706 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92e491f9aaa ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.734354 env[1314]: 2025-05-08 00:41:07.710 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.734354 env[1314]: 2025-05-08 00:41:07.716 [INFO][4919] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9d7454a-993f-4132-8ced-f8cdba985c53", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d", Pod:"coredns-7db6d8ff4d-89lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92e491f9aaa", MAC:"42:49:0e:0c:03:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:07.734354 env[1314]: 2025-05-08 00:41:07.729 [INFO][4919] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-89lsx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:07.742000 audit[4957]: NETFILTER_CFG table=filter:103 family=2 entries=30 op=nft_register_chain pid=4957 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:07.742000 audit[4957]: SYSCALL arch=c000003e syscall=46 success=yes exit=17032 a0=3 a1=7fffe62d9510 a2=0 a3=7fffe62d94fc items=0 ppid=4564 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:07.742000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:07.750521 env[1314]: time="2025-05-08T00:41:07.750422108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:41:07.750521 env[1314]: time="2025-05-08T00:41:07.750481712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:41:07.750521 env[1314]: time="2025-05-08T00:41:07.750493264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:41:07.750976 env[1314]: time="2025-05-08T00:41:07.750908232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d pid=4966 runtime=io.containerd.runc.v2 May 8 00:41:07.756963 systemd-networkd[1082]: cali758d0da9acf: Gained IPv6LL May 8 00:41:07.783451 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:41:07.808197 env[1314]: time="2025-05-08T00:41:07.808142678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-89lsx,Uid:e9d7454a-993f-4132-8ced-f8cdba985c53,Namespace:kube-system,Attempt:1,} returns sandbox id \"b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d\"" May 8 00:41:07.808864 kubelet[2259]: E0508 00:41:07.808822 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:07.812297 env[1314]: time="2025-05-08T00:41:07.812223464Z" level=info msg="CreateContainer within sandbox \"b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:41:07.821309 systemd-networkd[1082]: vxlan.calico: Gained IPv6LL May 8 00:41:07.827439 env[1314]: time="2025-05-08T00:41:07.827389267Z" level=info msg="CreateContainer within sandbox \"b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6ac4e5144117d9c8e0235098eceefaccdad759f507c51e5e351089b959b412a\"" May 8 00:41:07.827989 env[1314]: time="2025-05-08T00:41:07.827954171Z" level=info msg="StartContainer for \"a6ac4e5144117d9c8e0235098eceefaccdad759f507c51e5e351089b959b412a\"" May 8 00:41:07.873422 env[1314]: 
time="2025-05-08T00:41:07.873360616Z" level=info msg="StartContainer for \"a6ac4e5144117d9c8e0235098eceefaccdad759f507c51e5e351089b959b412a\" returns successfully" May 8 00:41:07.988247 kubelet[2259]: E0508 00:41:07.988104 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:07.988543 kubelet[2259]: E0508 00:41:07.988509 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:08.013000 audit[5040]: NETFILTER_CFG table=filter:104 family=2 entries=16 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:08.013000 audit[5040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe4016b0e0 a2=0 a3=7ffe4016b0cc items=0 ppid=2447 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:08.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:08.018031 kubelet[2259]: I0508 00:41:08.017340 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-89lsx" podStartSLOduration=79.017316025 podStartE2EDuration="1m19.017316025s" podCreationTimestamp="2025-05-08 00:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:41:08.001603065 +0000 UTC m=+92.688013838" watchObservedRunningTime="2025-05-08 00:41:08.017316025 +0000 UTC m=+92.703726767" May 8 00:41:08.019000 audit[5040]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule 
pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:08.019000 audit[5040]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe4016b0e0 a2=0 a3=0 items=0 ppid=2447 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:08.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:08.028000 audit[5042]: NETFILTER_CFG table=filter:106 family=2 entries=13 op=nft_register_rule pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:08.028000 audit[5042]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe3e1ae2d0 a2=0 a3=7ffe3e1ae2bc items=0 ppid=2447 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:08.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:08.034000 audit[5042]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:08.034000 audit[5042]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe3e1ae2d0 a2=0 a3=7ffe3e1ae2bc items=0 ppid=2447 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:08.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:08.410951 env[1314]: 
time="2025-05-08T00:41:08.410788177Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.484 [INFO][5060] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.484 [INFO][5060] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" iface="eth0" netns="/var/run/netns/cni-b3a944f4-0654-f48a-145a-19773a97c7ff" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.484 [INFO][5060] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" iface="eth0" netns="/var/run/netns/cni-b3a944f4-0654-f48a-145a-19773a97c7ff" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.485 [INFO][5060] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" iface="eth0" netns="/var/run/netns/cni-b3a944f4-0654-f48a-145a-19773a97c7ff" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.485 [INFO][5060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.485 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.505 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.505 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.505 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.579 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.579 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.581 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:08.584416 env[1314]: 2025-05-08 00:41:08.582 [INFO][5060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:08.585383 env[1314]: time="2025-05-08T00:41:08.584556447Z" level=info msg="TearDown network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" successfully" May 8 00:41:08.585383 env[1314]: time="2025-05-08T00:41:08.584597926Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" returns successfully" May 8 00:41:08.585383 env[1314]: time="2025-05-08T00:41:08.585239495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-8tf24,Uid:3dd70705-8c14-4d08-9f87-66c93e2ace47,Namespace:calico-apiserver,Attempt:1,}" May 8 00:41:08.587317 systemd[1]: run-netns-cni\x2db3a944f4\x2d0654\x2df48a\x2d145a\x2d19773a97c7ff.mount: Deactivated successfully. 
May 8 00:41:08.989977 kubelet[2259]: E0508 00:41:08.989872 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:08.990479 kubelet[2259]: E0508 00:41:08.990213 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:09.174000 audit[5088]: NETFILTER_CFG table=filter:108 family=2 entries=10 op=nft_register_rule pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:09.174000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff1c002a80 a2=0 a3=7fff1c002a6c items=0 ppid=2447 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:09.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:09.197000 audit[5088]: NETFILTER_CFG table=nat:109 family=2 entries=56 op=nft_register_chain pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:09.197000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff1c002a80 a2=0 a3=7fff1c002a6c items=0 ppid=2447 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:09.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:09.229404 systemd-networkd[1082]: cali92e491f9aaa: Gained IPv6LL May 8 00:41:09.279929 systemd-networkd[1082]: cali123c5c113b4: Link UP May 
8 00:41:09.281675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:41:09.281743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali123c5c113b4: link becomes ready May 8 00:41:09.281899 systemd-networkd[1082]: cali123c5c113b4: Gained carrier May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.203 [INFO][5076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0 calico-apiserver-655fb5665b- calico-apiserver 3dd70705-8c14-4d08-9f87-66c93e2ace47 1154 0 2025-05-08 00:39:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655fb5665b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655fb5665b-8tf24 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali123c5c113b4 [] []}} ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.203 [INFO][5076] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.238 [INFO][5095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" HandleID="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.294286 
env[1314]: 2025-05-08 00:41:09.247 [INFO][5095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" HandleID="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655fb5665b-8tf24", "timestamp":"2025-05-08 00:41:09.238712001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.247 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.247 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.247 [INFO][5095] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.249 [INFO][5095] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.253 [INFO][5095] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.259 [INFO][5095] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.262 [INFO][5095] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.264 [INFO][5095] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.264 [INFO][5095] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.265 [INFO][5095] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.269 [INFO][5095] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.275 [INFO][5095] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" host="localhost" May 8 00:41:09.294286 
env[1314]: 2025-05-08 00:41:09.275 [INFO][5095] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" host="localhost" May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.275 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:09.294286 env[1314]: 2025-05-08 00:41:09.275 [INFO][5095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" HandleID="k8s-pod-network.4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.295189 env[1314]: 2025-05-08 00:41:09.277 [INFO][5076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dd70705-8c14-4d08-9f87-66c93e2ace47", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655fb5665b-8tf24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali123c5c113b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:09.295189 env[1314]: 2025-05-08 00:41:09.277 [INFO][5076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.295189 env[1314]: 2025-05-08 00:41:09.277 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali123c5c113b4 ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.295189 env[1314]: 2025-05-08 00:41:09.281 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.295189 env[1314]: 2025-05-08 00:41:09.282 [INFO][5076] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dd70705-8c14-4d08-9f87-66c93e2ace47", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a", Pod:"calico-apiserver-655fb5665b-8tf24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali123c5c113b4", MAC:"2a:14:6d:04:97:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:09.295189 env[1314]: 2025-05-08 00:41:09.291 [INFO][5076] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-8tf24" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:09.302000 audit[5119]: 
NETFILTER_CFG table=filter:110 family=2 entries=48 op=nft_register_chain pid=5119 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:09.302000 audit[5119]: SYSCALL arch=c000003e syscall=46 success=yes exit=25868 a0=3 a1=7fffccbdea00 a2=0 a3=7fffccbde9ec items=0 ppid=4564 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:09.302000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:09.306324 env[1314]: time="2025-05-08T00:41:09.306261960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:41:09.306404 env[1314]: time="2025-05-08T00:41:09.306320391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:41:09.306404 env[1314]: time="2025-05-08T00:41:09.306337153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:41:09.306625 env[1314]: time="2025-05-08T00:41:09.306587148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a pid=5124 runtime=io.containerd.runc.v2 May 8 00:41:09.343905 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:41:09.370613 env[1314]: time="2025-05-08T00:41:09.370570371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-8tf24,Uid:3dd70705-8c14-4d08-9f87-66c93e2ace47,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a\"" May 8 00:41:09.372596 env[1314]: time="2025-05-08T00:41:09.372564139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:41:09.993449 kubelet[2259]: E0508 00:41:09.993408 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:10.411422 env[1314]: time="2025-05-08T00:41:10.411271826Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:41:10.411422 env[1314]: time="2025-05-08T00:41:10.411346798Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.532 [INFO][5190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.532 [INFO][5190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" iface="eth0" netns="/var/run/netns/cni-d156d968-df4f-e89f-1008-318d0e4807ff" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.532 [INFO][5190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" iface="eth0" netns="/var/run/netns/cni-d156d968-df4f-e89f-1008-318d0e4807ff" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.533 [INFO][5190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" iface="eth0" netns="/var/run/netns/cni-d156d968-df4f-e89f-1008-318d0e4807ff" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.533 [INFO][5190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.533 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.554 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.555 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.555 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.762 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.762 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.764 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:10.767664 env[1314]: 2025-05-08 00:41:10.766 [INFO][5190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:10.768500 env[1314]: time="2025-05-08T00:41:10.767878375Z" level=info msg="TearDown network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" successfully" May 8 00:41:10.768500 env[1314]: time="2025-05-08T00:41:10.767921588Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" returns successfully" May 8 00:41:10.768710 env[1314]: time="2025-05-08T00:41:10.768682653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrhhb,Uid:a1c58f86-7966-473c-98f3-e00538745ae1,Namespace:calico-system,Attempt:1,}" May 8 00:41:10.772869 systemd[1]: run-netns-cni\x2dd156d968\x2ddf4f\x2de89f\x2d1008\x2d318d0e4807ff.mount: Deactivated successfully. 
May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.763 [INFO][5191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.763 [INFO][5191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" iface="eth0" netns="/var/run/netns/cni-e159f4e8-84c4-21a0-29ab-925c110e5146" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.764 [INFO][5191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" iface="eth0" netns="/var/run/netns/cni-e159f4e8-84c4-21a0-29ab-925c110e5146" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.764 [INFO][5191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" iface="eth0" netns="/var/run/netns/cni-e159f4e8-84c4-21a0-29ab-925c110e5146" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.764 [INFO][5191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.764 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.786 [INFO][5216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.786 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.786 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.792 [WARNING][5216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.792 [INFO][5216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.794 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:10.797261 env[1314]: 2025-05-08 00:41:10.795 [INFO][5191] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:10.800934 env[1314]: time="2025-05-08T00:41:10.797449807Z" level=info msg="TearDown network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" successfully" May 8 00:41:10.800934 env[1314]: time="2025-05-08T00:41:10.797506163Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" returns successfully" May 8 00:41:10.800934 env[1314]: time="2025-05-08T00:41:10.798210812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-575f4bf5b7-jhlnt,Uid:5444d20d-8a4f-4e35-a777-fef99f439552,Namespace:calico-system,Attempt:1,}" May 8 00:41:10.799894 systemd[1]: run-netns-cni\x2de159f4e8\x2d84c4\x2d21a0\x2d29ab\x2d925c110e5146.mount: Deactivated successfully. May 8 00:41:11.085042 systemd-networkd[1082]: cali123c5c113b4: Gained IPv6LL May 8 00:41:11.141618 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:48144.service. May 8 00:41:11.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.16:22-10.0.0.1:48144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:11.147518 kernel: kauditd_printk_skb: 546 callbacks suppressed May 8 00:41:11.147608 kernel: audit: type=1130 audit(1746664871.140:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.16:22-10.0.0.1:48144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:11.176000 audit[5223]: USER_ACCT pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.177485 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 48144 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:11.180081 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:11.178000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.184780 systemd-logind[1294]: New session 23 of user core. May 8 00:41:11.185435 kernel: audit: type=1101 audit(1746664871.176:516): pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.185489 kernel: audit: type=1103 audit(1746664871.178:517): pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.185522 kernel: audit: type=1006 audit(1746664871.178:518): pid=5223 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 8 00:41:11.186029 systemd[1]: Started session-23.scope. 
May 8 00:41:11.192935 kernel: audit: type=1300 audit(1746664871.178:518): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd52d4e730 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:11.178000 audit[5223]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd52d4e730 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:11.178000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:11.201254 kernel: audit: type=1327 audit(1746664871.178:518): proctitle=737368643A20636F7265205B707269765D May 8 00:41:11.201335 kernel: audit: type=1105 audit(1746664871.193:519): pid=5223 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.193000 audit[5223]: USER_START pid=5223 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.205372 kernel: audit: type=1103 audit(1746664871.195:520): pid=5239 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.195000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.379379 systemd-networkd[1082]: cali68e86e77a49: Link UP May 8 00:41:11.398374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:41:11.398503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali68e86e77a49: link becomes ready May 8 00:41:11.399256 systemd-networkd[1082]: cali68e86e77a49: Gained carrier May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.222 [INFO][5228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0 calico-kube-controllers-575f4bf5b7- calico-system 5444d20d-8a4f-4e35-a777-fef99f439552 1173 0 2025-05-08 00:39:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:575f4bf5b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-575f4bf5b7-jhlnt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali68e86e77a49 [] []}} ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.222 [INFO][5228] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.258 [INFO][5257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" HandleID="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.268 [INFO][5257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" HandleID="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001320e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-575f4bf5b7-jhlnt", "timestamp":"2025-05-08 00:41:11.258256916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.268 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.268 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.268 [INFO][5257] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.270 [INFO][5257] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.276 [INFO][5257] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.285 [INFO][5257] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.287 [INFO][5257] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.289 [INFO][5257] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.289 [INFO][5257] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.290 [INFO][5257] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36 May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.332 [INFO][5257] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.372 [INFO][5257] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" host="localhost" May 8 00:41:11.516445 
env[1314]: 2025-05-08 00:41:11.372 [INFO][5257] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" host="localhost" May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.372 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:11.516445 env[1314]: 2025-05-08 00:41:11.372 [INFO][5257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" HandleID="k8s-pod-network.528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.517406 env[1314]: 2025-05-08 00:41:11.375 [INFO][5228] cni-plugin/k8s.go 386: Populated endpoint ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0", GenerateName:"calico-kube-controllers-575f4bf5b7-", Namespace:"calico-system", SelfLink:"", UID:"5444d20d-8a4f-4e35-a777-fef99f439552", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"575f4bf5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-575f4bf5b7-jhlnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68e86e77a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:11.517406 env[1314]: 2025-05-08 00:41:11.375 [INFO][5228] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.517406 env[1314]: 2025-05-08 00:41:11.375 [INFO][5228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68e86e77a49 ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.517406 env[1314]: 2025-05-08 00:41:11.399 [INFO][5228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.517406 env[1314]: 2025-05-08 00:41:11.400 [INFO][5228] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" 
Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0", GenerateName:"calico-kube-controllers-575f4bf5b7-", Namespace:"calico-system", SelfLink:"", UID:"5444d20d-8a4f-4e35-a777-fef99f439552", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"575f4bf5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36", Pod:"calico-kube-controllers-575f4bf5b7-jhlnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68e86e77a49", MAC:"be:5b:b4:50:9f:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:11.517406 env[1314]: 2025-05-08 00:41:11.514 [INFO][5228] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36" Namespace="calico-system" Pod="calico-kube-controllers-575f4bf5b7-jhlnt" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:11.519461 sshd[5223]: pam_unix(sshd:session): session closed for user core May 8 00:41:11.521000 audit[5223]: USER_END pid=5223 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.526000 audit[5298]: NETFILTER_CFG table=filter:111 family=2 entries=46 op=nft_register_chain pid=5298 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:11.552226 systemd-logind[1294]: Session 23 logged out. Waiting for processes to exit. May 8 00:41:11.552979 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:48144.service: Deactivated successfully. May 8 00:41:11.554305 kernel: audit: type=1106 audit(1746664871.521:521): pid=5223 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.554446 kernel: audit: type=1325 audit(1746664871.526:522): table=filter:111 family=2 entries=46 op=nft_register_chain pid=5298 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:11.526000 audit[5298]: SYSCALL arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffee2e05720 a2=0 a3=7ffee2e0570c items=0 ppid=4564 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:11.526000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 
00:41:11.527000 audit[5223]: CRED_DISP pid=5223 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:11.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.16:22-10.0.0.1:48144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:11.553924 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:41:11.555705 systemd-logind[1294]: Removed session 23. May 8 00:41:11.797749 systemd-networkd[1082]: califffc6941c6c: Link UP May 8 00:41:11.800895 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califffc6941c6c: link becomes ready May 8 00:41:11.800576 systemd-networkd[1082]: califffc6941c6c: Gained carrier May 8 00:41:11.812226 env[1314]: time="2025-05-08T00:41:11.812128463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:41:11.812226 env[1314]: time="2025-05-08T00:41:11.812173969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:41:11.812226 env[1314]: time="2025-05-08T00:41:11.812187525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:41:11.812508 env[1314]: time="2025-05-08T00:41:11.812382706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36 pid=5309 runtime=io.containerd.runc.v2 May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.240 [INFO][5242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rrhhb-eth0 csi-node-driver- calico-system a1c58f86-7966-473c-98f3-e00538745ae1 1172 0 2025-05-08 00:39:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rrhhb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califffc6941c6c [] []}} ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.240 [INFO][5242] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.276 [INFO][5274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" HandleID="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 
00:41:11.841097 env[1314]: 2025-05-08 00:41:11.285 [INFO][5274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" HandleID="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000339540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rrhhb", "timestamp":"2025-05-08 00:41:11.27682031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.285 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.372 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.372 [INFO][5274] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.495 [INFO][5274] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.556 [INFO][5274] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.561 [INFO][5274] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.563 [INFO][5274] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.565 [INFO][5274] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.565 [INFO][5274] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.566 [INFO][5274] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574 May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.776 [INFO][5274] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.793 [INFO][5274] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" host="localhost" May 8 00:41:11.841097 
env[1314]: 2025-05-08 00:41:11.793 [INFO][5274] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" host="localhost" May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.793 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:11.841097 env[1314]: 2025-05-08 00:41:11.793 [INFO][5274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" HandleID="k8s-pod-network.0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:11.841712 env[1314]: 2025-05-08 00:41:11.795 [INFO][5242] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrhhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1c58f86-7966-473c-98f3-e00538745ae1", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rrhhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califffc6941c6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:11.841712 env[1314]: 2025-05-08 00:41:11.795 [INFO][5242] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:11.841712 env[1314]: 2025-05-08 00:41:11.796 [INFO][5242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califffc6941c6c ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:11.841712 env[1314]: 2025-05-08 00:41:11.800 [INFO][5242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:11.841712 env[1314]: 2025-05-08 00:41:11.800 [INFO][5242] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrhhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1c58f86-7966-473c-98f3-e00538745ae1", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574", Pod:"csi-node-driver-rrhhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califffc6941c6c", MAC:"96:cf:a6:f9:e7:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:11.841712 env[1314]: 2025-05-08 00:41:11.838 [INFO][5242] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574" Namespace="calico-system" Pod="csi-node-driver-rrhhb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:11.848000 audit[5336]: NETFILTER_CFG table=filter:112 family=2 entries=46 op=nft_register_chain pid=5336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:11.848000 audit[5336]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=22204 a0=3 a1=7ffcefdfc6a0 a2=0 a3=7ffcefdfc68c items=0 ppid=4564 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:11.848000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:11.855359 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:41:11.876037 env[1314]: time="2025-05-08T00:41:11.875966272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:41:11.876262 env[1314]: time="2025-05-08T00:41:11.876236455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:41:11.876377 env[1314]: time="2025-05-08T00:41:11.876354068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:41:11.876790 env[1314]: time="2025-05-08T00:41:11.876712950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574 pid=5353 runtime=io.containerd.runc.v2 May 8 00:41:11.898378 env[1314]: time="2025-05-08T00:41:11.898338010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-575f4bf5b7-jhlnt,Uid:5444d20d-8a4f-4e35-a777-fef99f439552,Namespace:calico-system,Attempt:1,} returns sandbox id \"528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36\"" May 8 00:41:11.910136 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:41:11.921342 env[1314]: time="2025-05-08T00:41:11.921299118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrhhb,Uid:a1c58f86-7966-473c-98f3-e00538745ae1,Namespace:calico-system,Attempt:1,} returns sandbox id \"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574\"" May 8 00:41:12.771272 systemd[1]: run-containerd-runc-k8s.io-0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574-runc.9mOy3A.mount: Deactivated successfully. 
May 8 00:41:13.069407 systemd-networkd[1082]: cali68e86e77a49: Gained IPv6LL May 8 00:41:13.254540 env[1314]: time="2025-05-08T00:41:13.254479392Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:13.256666 env[1314]: time="2025-05-08T00:41:13.256635497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:13.258236 env[1314]: time="2025-05-08T00:41:13.258201852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:13.259904 env[1314]: time="2025-05-08T00:41:13.259853910Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:13.260316 env[1314]: time="2025-05-08T00:41:13.260282965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:41:13.261663 env[1314]: time="2025-05-08T00:41:13.261614072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:41:13.264186 env[1314]: time="2025-05-08T00:41:13.264133388Z" level=info msg="CreateContainer within sandbox \"4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:41:13.453975 systemd-networkd[1082]: califffc6941c6c: Gained IPv6LL May 8 00:41:13.635171 env[1314]: time="2025-05-08T00:41:13.635081204Z" level=info msg="CreateContainer within 
sandbox \"4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"728120a1a10a3841fe1b90f4bdf761b77b1c87cf10d19295fea5e0e1725b2fdd\"" May 8 00:41:13.635796 env[1314]: time="2025-05-08T00:41:13.635725399Z" level=info msg="StartContainer for \"728120a1a10a3841fe1b90f4bdf761b77b1c87cf10d19295fea5e0e1725b2fdd\"" May 8 00:41:13.701459 env[1314]: time="2025-05-08T00:41:13.701402976Z" level=info msg="StartContainer for \"728120a1a10a3841fe1b90f4bdf761b77b1c87cf10d19295fea5e0e1725b2fdd\" returns successfully" May 8 00:41:14.024721 kubelet[2259]: I0508 00:41:14.024637 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655fb5665b-8tf24" podStartSLOduration=73.135352069 podStartE2EDuration="1m17.024604337s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="2025-05-08 00:41:09.372107121 +0000 UTC m=+94.058517863" lastFinishedPulling="2025-05-08 00:41:13.261359358 +0000 UTC m=+97.947770131" observedRunningTime="2025-05-08 00:41:14.022063922 +0000 UTC m=+98.708474674" watchObservedRunningTime="2025-05-08 00:41:14.024604337 +0000 UTC m=+98.711015079" May 8 00:41:14.039000 audit[5437]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:14.039000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd9d15dd60 a2=0 a3=7ffd9d15dd4c items=0 ppid=2447 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:14.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:14.043000 audit[5437]: NETFILTER_CFG table=nat:114 family=2 entries=20 
op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:14.043000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9d15dd60 a2=0 a3=7ffd9d15dd4c items=0 ppid=2447 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:14.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:15.096000 audit[5440]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:15.096000 audit[5440]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7dc41d20 a2=0 a3=7ffc7dc41d0c items=0 ppid=2447 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:15.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:15.103000 audit[5440]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:15.103000 audit[5440]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc7dc41d20 a2=0 a3=7ffc7dc41d0c items=0 ppid=2447 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:15.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:16.019164 
env[1314]: time="2025-05-08T00:41:16.019074671Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:16.027394 env[1314]: time="2025-05-08T00:41:16.027343791Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:16.029283 env[1314]: time="2025-05-08T00:41:16.029239471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:16.033065 env[1314]: time="2025-05-08T00:41:16.033023345Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:16.033522 env[1314]: time="2025-05-08T00:41:16.033490553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:41:16.034496 env[1314]: time="2025-05-08T00:41:16.034468339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:41:16.041878 env[1314]: time="2025-05-08T00:41:16.041820468Z" level=info msg="CreateContainer within sandbox \"528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:41:16.057056 env[1314]: time="2025-05-08T00:41:16.056986113Z" level=info msg="CreateContainer within sandbox \"528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container 
id \"cb95fdc351c2593391d5a8dece22b799705043909f36dc3c66401d3042657ddb\"" May 8 00:41:16.057662 env[1314]: time="2025-05-08T00:41:16.057604847Z" level=info msg="StartContainer for \"cb95fdc351c2593391d5a8dece22b799705043909f36dc3c66401d3042657ddb\"" May 8 00:41:16.127882 env[1314]: time="2025-05-08T00:41:16.127802179Z" level=info msg="StartContainer for \"cb95fdc351c2593391d5a8dece22b799705043909f36dc3c66401d3042657ddb\" returns successfully" May 8 00:41:16.411453 env[1314]: time="2025-05-08T00:41:16.411314148Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.464 [INFO][5495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.464 [INFO][5495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" iface="eth0" netns="/var/run/netns/cni-e94946ff-f197-4d7e-d0e3-6f925e9bea86" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.464 [INFO][5495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" iface="eth0" netns="/var/run/netns/cni-e94946ff-f197-4d7e-d0e3-6f925e9bea86" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.464 [INFO][5495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" iface="eth0" netns="/var/run/netns/cni-e94946ff-f197-4d7e-d0e3-6f925e9bea86" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.464 [INFO][5495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.464 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.485 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.485 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.485 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.492 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.492 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.493 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:16.496646 env[1314]: 2025-05-08 00:41:16.494 [INFO][5495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:16.497170 env[1314]: time="2025-05-08T00:41:16.496784148Z" level=info msg="TearDown network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" successfully" May 8 00:41:16.497170 env[1314]: time="2025-05-08T00:41:16.496823081Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" returns successfully" May 8 00:41:16.497903 env[1314]: time="2025-05-08T00:41:16.497792342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-b5526,Uid:d7ae0688-a473-448c-b8b9-7f2261bb0d9a,Namespace:calico-apiserver,Attempt:1,}" May 8 00:41:16.528150 kernel: kauditd_printk_skb: 19 callbacks suppressed May 8 00:41:16.528282 kernel: audit: type=1130 audit(1746664876.521:530): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.16:22-10.0.0.1:47810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:16.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.16:22-10.0.0.1:47810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:16.522261 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:47810.service. May 8 00:41:16.558000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.559859 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 47810 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:16.563000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.565217 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:16.576207 kernel: audit: type=1101 audit(1746664876.558:531): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.576343 kernel: audit: type=1103 audit(1746664876.563:532): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.576375 kernel: audit: type=1006 audit(1746664876.563:533): pid=5523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 
res=1 May 8 00:41:16.576400 kernel: audit: type=1300 audit(1746664876.563:533): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfb60e020 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:16.563000 audit[5523]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfb60e020 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:16.571680 systemd[1]: Started session-24.scope. May 8 00:41:16.572331 systemd-logind[1294]: New session 24 of user core. May 8 00:41:16.563000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:16.579234 kernel: audit: type=1327 audit(1746664876.563:533): proctitle=737368643A20636F7265205B707269765D May 8 00:41:16.579285 kernel: audit: type=1105 audit(1746664876.577:534): pid=5523 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.577000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.579000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.588063 kernel: audit: type=1103 
audit(1746664876.579:535): pid=5537 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.633977 systemd-networkd[1082]: calic90e0b5b7d8: Link UP May 8 00:41:16.636684 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:41:16.636813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic90e0b5b7d8: link becomes ready May 8 00:41:16.637022 systemd-networkd[1082]: calic90e0b5b7d8: Gained carrier May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.551 [INFO][5511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0 calico-apiserver-655fb5665b- calico-apiserver d7ae0688-a473-448c-b8b9-7f2261bb0d9a 1228 0 2025-05-08 00:39:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655fb5665b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655fb5665b-b5526 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic90e0b5b7d8 [] []}} ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.551 [INFO][5511] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.587 [INFO][5530] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" HandleID="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.598 [INFO][5530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" HandleID="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038ecc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655fb5665b-b5526", "timestamp":"2025-05-08 00:41:16.587662336 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.598 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.598 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.598 [INFO][5530] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.600 [INFO][5530] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.604 [INFO][5530] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.609 [INFO][5530] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.611 [INFO][5530] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.614 [INFO][5530] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.614 [INFO][5530] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.615 [INFO][5530] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139 May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.622 [INFO][5530] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.628 [INFO][5530] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" host="localhost" May 8 00:41:16.653456 
env[1314]: 2025-05-08 00:41:16.628 [INFO][5530] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" host="localhost" May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.628 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:16.653456 env[1314]: 2025-05-08 00:41:16.628 [INFO][5530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" HandleID="k8s-pod-network.e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.654608 env[1314]: 2025-05-08 00:41:16.631 [INFO][5511] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7ae0688-a473-448c-b8b9-7f2261bb0d9a", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655fb5665b-b5526", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic90e0b5b7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:16.654608 env[1314]: 2025-05-08 00:41:16.632 [INFO][5511] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.654608 env[1314]: 2025-05-08 00:41:16.632 [INFO][5511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic90e0b5b7d8 ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.654608 env[1314]: 2025-05-08 00:41:16.637 [INFO][5511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.654608 env[1314]: 2025-05-08 00:41:16.638 [INFO][5511] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7ae0688-a473-448c-b8b9-7f2261bb0d9a", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139", Pod:"calico-apiserver-655fb5665b-b5526", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic90e0b5b7d8", MAC:"16:dd:3f:23:fe:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:16.654608 env[1314]: 2025-05-08 00:41:16.649 [INFO][5511] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139" Namespace="calico-apiserver" Pod="calico-apiserver-655fb5665b-b5526" WorkloadEndpoint="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:16.676508 env[1314]: 
time="2025-05-08T00:41:16.675680975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:41:16.676508 env[1314]: time="2025-05-08T00:41:16.675719238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:41:16.676508 env[1314]: time="2025-05-08T00:41:16.675728044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:41:16.676508 env[1314]: time="2025-05-08T00:41:16.675982868Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139 pid=5569 runtime=io.containerd.runc.v2 May 8 00:41:16.678000 audit[5580]: NETFILTER_CFG table=filter:117 family=2 entries=50 op=nft_register_chain pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:16.678000 audit[5580]: SYSCALL arch=c000003e syscall=46 success=yes exit=25080 a0=3 a1=7ffdc49c82a0 a2=0 a3=7ffdc49c828c items=0 ppid=4564 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:16.687100 kernel: audit: type=1325 audit(1746664876.678:536): table=filter:117 family=2 entries=50 op=nft_register_chain pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 8 00:41:16.687189 kernel: audit: type=1300 audit(1746664876.678:536): arch=c000003e syscall=46 success=yes exit=25080 a0=3 a1=7ffdc49c82a0 a2=0 a3=7ffdc49c828c items=0 ppid=4564 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 
00:41:16.678000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 8 00:41:16.706522 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:41:16.732000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.732301 sshd[5523]: pam_unix(sshd:session): session closed for user core May 8 00:41:16.732000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.735343 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:47820.service. May 8 00:41:16.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.16:22-10.0.0.1:47820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:16.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.16:22-10.0.0.1:47810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:16.736035 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:47810.service: Deactivated successfully. May 8 00:41:16.737479 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:41:16.738177 systemd-logind[1294]: Session 24 logged out. Waiting for processes to exit. May 8 00:41:16.739556 systemd-logind[1294]: Removed session 24. 
May 8 00:41:16.740923 env[1314]: time="2025-05-08T00:41:16.740885638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655fb5665b-b5526,Uid:d7ae0688-a473-448c-b8b9-7f2261bb0d9a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139\"" May 8 00:41:16.744014 env[1314]: time="2025-05-08T00:41:16.743967559Z" level=info msg="CreateContainer within sandbox \"e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:41:16.764000 env[1314]: time="2025-05-08T00:41:16.763924632Z" level=info msg="CreateContainer within sandbox \"e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"759219feb9f7ddf7d86e1a53cb8cbef5884cd23e806264ef8df114fd2ef1e340\"" May 8 00:41:16.764670 env[1314]: time="2025-05-08T00:41:16.764543927Z" level=info msg="StartContainer for \"759219feb9f7ddf7d86e1a53cb8cbef5884cd23e806264ef8df114fd2ef1e340\"" May 8 00:41:16.768000 audit[5606]: USER_ACCT pid=5606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.769263 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 47820 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:16.770000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.770000 audit[5606]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe43a02540 a2=3 a3=0 items=0 ppid=1 pid=5606 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:16.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:16.771619 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:16.776764 systemd[1]: Started session-25.scope. May 8 00:41:16.777552 systemd-logind[1294]: New session 25 of user core. May 8 00:41:16.783000 audit[5606]: USER_START pid=5606 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.784000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:16.823509 env[1314]: time="2025-05-08T00:41:16.823446564Z" level=info msg="StartContainer for \"759219feb9f7ddf7d86e1a53cb8cbef5884cd23e806264ef8df114fd2ef1e340\" returns successfully" May 8 00:41:17.036616 kubelet[2259]: I0508 00:41:17.036294 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-575f4bf5b7-jhlnt" podStartSLOduration=75.902704494 podStartE2EDuration="1m20.036267066s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="2025-05-08 00:41:11.900756835 +0000 UTC m=+96.587167577" lastFinishedPulling="2025-05-08 00:41:16.034319416 +0000 UTC m=+100.720730149" observedRunningTime="2025-05-08 00:41:17.035814326 +0000 UTC m=+101.722225058" watchObservedRunningTime="2025-05-08 00:41:17.036267066 +0000 UTC m=+101.722677798" May 8 00:41:17.051830 kubelet[2259]: I0508 00:41:17.051720 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-655fb5665b-b5526" podStartSLOduration=81.051695627 podStartE2EDuration="1m21.051695627s" podCreationTimestamp="2025-05-08 00:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:41:17.051448207 +0000 UTC m=+101.737858970" watchObservedRunningTime="2025-05-08 00:41:17.051695627 +0000 UTC m=+101.738106369" May 8 00:41:17.066512 systemd[1]: run-netns-cni\x2de94946ff\x2df197\x2d4d7e\x2dd0e3\x2d6f925e9bea86.mount: Deactivated successfully. May 8 00:41:17.090000 audit[5679]: NETFILTER_CFG table=filter:118 family=2 entries=8 op=nft_register_rule pid=5679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:17.090000 audit[5679]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdd96c9590 a2=0 a3=7ffdd96c957c items=0 ppid=2447 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:17.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:17.095000 audit[5679]: NETFILTER_CFG table=nat:119 family=2 entries=30 op=nft_register_rule pid=5679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:17.095000 audit[5679]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffdd96c9590 a2=0 a3=7ffdd96c957c items=0 ppid=2447 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:17.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:17.116687 sshd[5606]: 
pam_unix(sshd:session): session closed for user core May 8 00:41:17.121067 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:47828.service. May 8 00:41:17.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.16:22-10.0.0.1:47828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:17.121000 audit[5606]: USER_END pid=5606 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:17.121000 audit[5606]: CRED_DISP pid=5606 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:17.124940 systemd-logind[1294]: Session 25 logged out. Waiting for processes to exit. May 8 00:41:17.126435 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:47820.service: Deactivated successfully. May 8 00:41:17.127237 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:41:17.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.16:22-10.0.0.1:47820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:17.128435 systemd-logind[1294]: Removed session 25. 
May 8 00:41:17.401000 audit[5684]: USER_ACCT pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:17.403101 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 47828 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:17.402000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:17.403000 audit[5684]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2be9d610 a2=3 a3=0 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:17.403000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:17.404153 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:17.408117 systemd-logind[1294]: New session 26 of user core. May 8 00:41:17.408948 systemd[1]: Started session-26.scope. 
May 8 00:41:17.413000 audit[5684]: USER_START pid=5684 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:17.414000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:18.189087 systemd-networkd[1082]: calic90e0b5b7d8: Gained IPv6LL May 8 00:41:18.431457 env[1314]: time="2025-05-08T00:41:18.431379477Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:18.439734 env[1314]: time="2025-05-08T00:41:18.439619229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:18.443967 env[1314]: time="2025-05-08T00:41:18.443918550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:18.447408 env[1314]: time="2025-05-08T00:41:18.447365975Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:18.447941 env[1314]: time="2025-05-08T00:41:18.447898506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:41:18.450519 
env[1314]: time="2025-05-08T00:41:18.450466541Z" level=info msg="CreateContainer within sandbox \"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:41:18.485342 env[1314]: time="2025-05-08T00:41:18.485264579Z" level=info msg="CreateContainer within sandbox \"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"85113ceb13277bb6bb6df9e84c75b18e5842890f34a527893695385b4a0697b7\"" May 8 00:41:18.486273 env[1314]: time="2025-05-08T00:41:18.486250942Z" level=info msg="StartContainer for \"85113ceb13277bb6bb6df9e84c75b18e5842890f34a527893695385b4a0697b7\"" May 8 00:41:18.585418 systemd[1]: run-containerd-runc-k8s.io-85113ceb13277bb6bb6df9e84c75b18e5842890f34a527893695385b4a0697b7-runc.SrFNFc.mount: Deactivated successfully. May 8 00:41:18.613000 audit[5736]: NETFILTER_CFG table=filter:120 family=2 entries=8 op=nft_register_rule pid=5736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:18.613000 audit[5736]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdab5f1320 a2=0 a3=7ffdab5f130c items=0 ppid=2447 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:18.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:18.620000 audit[5736]: NETFILTER_CFG table=nat:121 family=2 entries=34 op=nft_register_chain pid=5736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:18.620000 audit[5736]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffdab5f1320 a2=0 a3=7ffdab5f130c items=0 ppid=2447 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:18.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:18.674201 env[1314]: time="2025-05-08T00:41:18.674142543Z" level=info msg="StartContainer for \"85113ceb13277bb6bb6df9e84c75b18e5842890f34a527893695385b4a0697b7\" returns successfully" May 8 00:41:18.675710 env[1314]: time="2025-05-08T00:41:18.675682386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:41:20.007000 audit[5748]: NETFILTER_CFG table=filter:122 family=2 entries=8 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:20.007000 audit[5748]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff30b84ec0 a2=0 a3=7fff30b84eac items=0 ppid=2447 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:20.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:20.014000 audit[5748]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:20.014000 audit[5748]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff30b84ec0 a2=0 a3=7fff30b84eac items=0 ppid=2447 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:20.014000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:20.029000 audit[5750]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:20.029000 audit[5750]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffeb11e1d30 a2=0 a3=7ffeb11e1d1c items=0 ppid=2447 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:20.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:20.038000 audit[5750]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:20.038000 audit[5750]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffeb11e1d30 a2=0 a3=0 items=0 ppid=2447 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:20.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:20.439978 sshd[5684]: pam_unix(sshd:session): session closed for user core May 8 00:41:20.442000 audit[5684]: USER_END pid=5684 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.16:22-10.0.0.1:47836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:20.442000 audit[5684]: CRED_DISP pid=5684 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.443285 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:47836.service. May 8 00:41:20.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.16:22-10.0.0.1:47828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:20.444929 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:47828.service: Deactivated successfully. May 8 00:41:20.446704 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:41:20.447612 systemd-logind[1294]: Session 26 logged out. Waiting for processes to exit. May 8 00:41:20.448722 systemd-logind[1294]: Removed session 26. 
May 8 00:41:20.475000 audit[5753]: USER_ACCT pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.476916 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:20.476000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.476000 audit[5753]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc2451eb0 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:20.476000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:20.478000 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:20.481928 systemd-logind[1294]: New session 27 of user core. May 8 00:41:20.482640 systemd[1]: Started session-27.scope. 
May 8 00:41:20.486000 audit[5753]: USER_START pid=5753 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.487000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.942168 sshd[5753]: pam_unix(sshd:session): session closed for user core May 8 00:41:20.944000 audit[5753]: USER_END pid=5753 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.944000 audit[5753]: CRED_DISP pid=5753 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:20.945504 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:47844.service. May 8 00:41:20.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.16:22-10.0.0.1:47844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:20.947022 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:47836.service: Deactivated successfully. May 8 00:41:20.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.16:22-10.0.0.1:47836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:20.948192 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:41:20.957694 systemd-logind[1294]: Session 27 logged out. Waiting for processes to exit. May 8 00:41:20.958940 systemd-logind[1294]: Removed session 27. May 8 00:41:21.031351 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 47844 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:21.030000 audit[5765]: USER_ACCT pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:21.031000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:21.031000 audit[5765]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffaae23d0 a2=3 a3=0 items=0 ppid=1 pid=5765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:21.031000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:21.033024 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:21.041972 systemd[1]: Started session-28.scope. May 8 00:41:21.043006 systemd-logind[1294]: New session 28 of user core. 
May 8 00:41:21.049000 audit[5765]: USER_START pid=5765 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:21.050000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:21.173320 sshd[5765]: pam_unix(sshd:session): session closed for user core May 8 00:41:21.174000 audit[5765]: USER_END pid=5765 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:21.174000 audit[5765]: CRED_DISP pid=5765 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:21.177024 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:47844.service: Deactivated successfully. May 8 00:41:21.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.16:22-10.0.0.1:47844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:21.178348 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:41:21.178372 systemd-logind[1294]: Session 28 logged out. Waiting for processes to exit. May 8 00:41:21.179789 systemd-logind[1294]: Removed session 28. 
May 8 00:41:21.308547 env[1314]: time="2025-05-08T00:41:21.308460133Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:21.310710 env[1314]: time="2025-05-08T00:41:21.310657003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:21.312518 env[1314]: time="2025-05-08T00:41:21.312469843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:21.314059 env[1314]: time="2025-05-08T00:41:21.314023441Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:41:21.314568 env[1314]: time="2025-05-08T00:41:21.314545342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:41:21.316824 env[1314]: time="2025-05-08T00:41:21.316784522Z" level=info msg="CreateContainer within sandbox \"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:41:21.332370 env[1314]: time="2025-05-08T00:41:21.332319284Z" level=info msg="CreateContainer within sandbox \"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2933d581898d45f318671a2e1f43e6c3e1f24dfea07a478e52ef914f18ab6f10\"" May 8 
00:41:21.332973 env[1314]: time="2025-05-08T00:41:21.332932719Z" level=info msg="StartContainer for \"2933d581898d45f318671a2e1f43e6c3e1f24dfea07a478e52ef914f18ab6f10\"" May 8 00:41:21.495969 env[1314]: time="2025-05-08T00:41:21.495911570Z" level=info msg="StartContainer for \"2933d581898d45f318671a2e1f43e6c3e1f24dfea07a478e52ef914f18ab6f10\" returns successfully" May 8 00:41:21.600067 kubelet[2259]: I0508 00:41:21.599937 2259 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:41:21.600067 kubelet[2259]: I0508 00:41:21.599998 2259 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:41:26.175265 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:41232.service. May 8 00:41:26.180199 kernel: kauditd_printk_skb: 72 callbacks suppressed May 8 00:41:26.180337 kernel: audit: type=1130 audit(1746664886.174:584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.16:22-10.0.0.1:41232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:26.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.16:22-10.0.0.1:41232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:26.210000 audit[5818]: USER_ACCT pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.211342 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 41232 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:26.214000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.215551 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:26.220540 systemd-logind[1294]: New session 29 of user core. May 8 00:41:26.221171 systemd[1]: Started session-29.scope. May 8 00:41:26.237598 kernel: audit: type=1101 audit(1746664886.210:585): pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.237746 kernel: audit: type=1103 audit(1746664886.214:586): pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.237771 kernel: audit: type=1006 audit(1746664886.214:587): pid=5818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 May 8 00:41:26.214000 audit[5818]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddc2de8a0 a2=3 a3=0 items=0 ppid=1 pid=5818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:26.249329 kernel: audit: type=1300 audit(1746664886.214:587): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddc2de8a0 a2=3 a3=0 items=0 ppid=1 pid=5818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:26.249417 kernel: audit: type=1327 audit(1746664886.214:587): proctitle=737368643A20636F7265205B707269765D May 8 00:41:26.214000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:26.225000 audit[5818]: USER_START pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.255276 kernel: audit: type=1105 audit(1746664886.225:588): pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.255364 kernel: audit: type=1103 audit(1746664886.227:589): pid=5823 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.227000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.339816 sshd[5818]: pam_unix(sshd:session): session closed for user core May 8 00:41:26.339000 audit[5818]: 
USER_END pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.342404 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:41232.service: Deactivated successfully. May 8 00:41:26.343223 systemd[1]: session-29.scope: Deactivated successfully. May 8 00:41:26.346482 systemd-logind[1294]: Session 29 logged out. Waiting for processes to exit. May 8 00:41:26.347163 systemd-logind[1294]: Removed session 29. May 8 00:41:26.368864 kernel: audit: type=1106 audit(1746664886.339:590): pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.340000 audit[5818]: CRED_DISP pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:26.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.16:22-10.0.0.1:41232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:26.373920 kernel: audit: type=1104 audit(1746664886.340:591): pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:28.362445 kubelet[2259]: E0508 00:41:28.362396 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:28.607959 kubelet[2259]: I0508 00:41:28.607881 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rrhhb" podStartSLOduration=82.215194352 podStartE2EDuration="1m31.60785928s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="2025-05-08 00:41:11.922717823 +0000 UTC m=+96.609128565" lastFinishedPulling="2025-05-08 00:41:21.315382751 +0000 UTC m=+106.001793493" observedRunningTime="2025-05-08 00:41:22.058031944 +0000 UTC m=+106.744442686" watchObservedRunningTime="2025-05-08 00:41:28.60785928 +0000 UTC m=+113.294270052" May 8 00:41:30.411234 kubelet[2259]: E0508 00:41:30.411191 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:30.722000 audit[5866]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:30.722000 audit[5866]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff83491c10 a2=0 a3=7fff83491bfc items=0 ppid=2447 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:30.722000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:30.727000 audit[5866]: NETFILTER_CFG table=nat:127 family=2 entries=106 op=nft_register_chain pid=5866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 8 00:41:30.727000 audit[5866]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fff83491c10 a2=0 a3=7fff83491bfc items=0 ppid=2447 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:30.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 8 00:41:31.343461 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:41238.service. May 8 00:41:31.346688 kernel: kauditd_printk_skb: 7 callbacks suppressed May 8 00:41:31.346740 kernel: audit: type=1130 audit(1746664891.342:595): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.16:22-10.0.0.1:41238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:31.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.16:22-10.0.0.1:41238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:31.375000 audit[5868]: USER_ACCT pid=5868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.376635 sshd[5868]: Accepted publickey for core from 10.0.0.1 port 41238 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:31.378303 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:31.377000 audit[5868]: CRED_ACQ pid=5868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.383066 systemd-logind[1294]: New session 30 of user core. May 8 00:41:31.383331 systemd[1]: Started session-30.scope. May 8 00:41:31.384323 kernel: audit: type=1101 audit(1746664891.375:596): pid=5868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.384498 kernel: audit: type=1103 audit(1746664891.377:597): pid=5868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.386800 kernel: audit: type=1006 audit(1746664891.377:598): pid=5868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 May 8 00:41:31.377000 audit[5868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdde984ed0 a2=3 a3=0 items=0 ppid=1 pid=5868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:31.377000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:31.392674 kernel: audit: type=1300 audit(1746664891.377:598): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdde984ed0 a2=3 a3=0 items=0 ppid=1 pid=5868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:31.392739 kernel: audit: type=1327 audit(1746664891.377:598): proctitle=737368643A20636F7265205B707269765D May 8 00:41:31.392767 kernel: audit: type=1105 audit(1746664891.388:599): pid=5868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.388000 audit[5868]: USER_START pid=5868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.390000 audit[5871]: CRED_ACQ pid=5871 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.401182 kernel: audit: type=1103 audit(1746664891.390:600): pid=5871 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.496440 sshd[5868]: pam_unix(sshd:session): session closed for user core May 8 00:41:31.496000 audit[5868]: 
USER_END pid=5868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.498894 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:41238.service: Deactivated successfully. May 8 00:41:31.500319 systemd-logind[1294]: Session 30 logged out. Waiting for processes to exit. May 8 00:41:31.500449 systemd[1]: session-30.scope: Deactivated successfully. May 8 00:41:31.501474 systemd-logind[1294]: Removed session 30. May 8 00:41:31.496000 audit[5868]: CRED_DISP pid=5868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.520002 kernel: audit: type=1106 audit(1746664891.496:601): pid=5868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.520076 kernel: audit: type=1104 audit(1746664891.496:602): pid=5868 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:31.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.16:22-10.0.0.1:41238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:35.391489 env[1314]: time="2025-05-08T00:41:35.391435995Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.435 [WARNING][5897] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9d7454a-993f-4132-8ced-f8cdba985c53", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d", Pod:"coredns-7db6d8ff4d-89lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92e491f9aaa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.435 [INFO][5897] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.435 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" iface="eth0" netns="" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.435 [INFO][5897] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.435 [INFO][5897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.460 [INFO][5907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.460 [INFO][5907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.460 [INFO][5907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.468 [WARNING][5907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.468 [INFO][5907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.469 [INFO][5907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:35.474239 env[1314]: 2025-05-08 00:41:35.472 [INFO][5897] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.475257 env[1314]: time="2025-05-08T00:41:35.474260323Z" level=info msg="TearDown network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" successfully" May 8 00:41:35.475257 env[1314]: time="2025-05-08T00:41:35.474290900Z" level=info msg="StopPodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" returns successfully" May 8 00:41:35.475772 env[1314]: time="2025-05-08T00:41:35.475737173Z" level=info msg="RemovePodSandbox for \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:41:35.475863 env[1314]: time="2025-05-08T00:41:35.475783320Z" level=info msg="Forcibly stopping sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\"" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.515 [WARNING][5929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9d7454a-993f-4132-8ced-f8cdba985c53", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b537d03449746ae1e2bf5eba6532897f6e4c0960e6eeec374d4e197bdef4a13d", Pod:"coredns-7db6d8ff4d-89lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92e491f9aaa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.515 [INFO][5929] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.515 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" iface="eth0" netns="" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.515 [INFO][5929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.515 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.545 [INFO][5937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.545 [INFO][5937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.545 [INFO][5937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.634 [WARNING][5937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.634 [INFO][5937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" HandleID="k8s-pod-network.9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" Workload="localhost-k8s-coredns--7db6d8ff4d--89lsx-eth0" May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.640 [INFO][5937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:35.645291 env[1314]: 2025-05-08 00:41:35.642 [INFO][5929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97" May 8 00:41:35.645291 env[1314]: time="2025-05-08T00:41:35.644769923Z" level=info msg="TearDown network for sandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" successfully" May 8 00:41:35.703808 env[1314]: time="2025-05-08T00:41:35.703738025Z" level=info msg="RemovePodSandbox \"9d296f9070e15d088c6b537635741bbc2abba95ba39227baf389f0967c82ee97\" returns successfully" May 8 00:41:35.704463 env[1314]: time="2025-05-08T00:41:35.704418506Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.793 [WARNING][5960] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dd70705-8c14-4d08-9f87-66c93e2ace47", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a", Pod:"calico-apiserver-655fb5665b-8tf24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali123c5c113b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.793 [INFO][5960] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.793 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" iface="eth0" netns="" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.793 [INFO][5960] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.793 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.833 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.833 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.833 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.839 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.839 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.841 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:35.844914 env[1314]: 2025-05-08 00:41:35.843 [INFO][5960] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.845584 env[1314]: time="2025-05-08T00:41:35.844956617Z" level=info msg="TearDown network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" successfully" May 8 00:41:35.845584 env[1314]: time="2025-05-08T00:41:35.845006752Z" level=info msg="StopPodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" returns successfully" May 8 00:41:35.845584 env[1314]: time="2025-05-08T00:41:35.845542788Z" level=info msg="RemovePodSandbox for \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:41:35.845769 env[1314]: time="2025-05-08T00:41:35.845577814Z" level=info msg="Forcibly stopping sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\"" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.958 [WARNING][5991] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dd70705-8c14-4d08-9f87-66c93e2ace47", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ac602677900494c27f36e3a3aa42eea9eea4fa089dd7ff87188ae1b94abcc0a", Pod:"calico-apiserver-655fb5665b-8tf24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali123c5c113b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.959 [INFO][5991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.959 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" iface="eth0" netns="" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.959 [INFO][5991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.959 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.982 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.982 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.982 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.990 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.990 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" HandleID="k8s-pod-network.76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" Workload="localhost-k8s-calico--apiserver--655fb5665b--8tf24-eth0" May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.992 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:35.995967 env[1314]: 2025-05-08 00:41:35.994 [INFO][5991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d" May 8 00:41:35.996459 env[1314]: time="2025-05-08T00:41:35.996016769Z" level=info msg="TearDown network for sandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" successfully" May 8 00:41:36.122939 env[1314]: time="2025-05-08T00:41:36.122813411Z" level=info msg="RemovePodSandbox \"76122b7cea559a9e6d38330c6708a7ef6779ae97af276c96e56bb15c4cb29e3d\" returns successfully" May 8 00:41:36.123466 env[1314]: time="2025-05-08T00:41:36.123406265Z" level=info msg="StopPodSandbox for \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\"" May 8 00:41:36.123692 env[1314]: time="2025-05-08T00:41:36.123496376Z" level=info msg="TearDown network for sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" successfully" May 8 00:41:36.123692 env[1314]: time="2025-05-08T00:41:36.123542023Z" level=info msg="StopPodSandbox for \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" returns successfully" May 8 00:41:36.124019 env[1314]: time="2025-05-08T00:41:36.123991465Z" level=info 
msg="RemovePodSandbox for \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\"" May 8 00:41:36.124076 env[1314]: time="2025-05-08T00:41:36.124022404Z" level=info msg="Forcibly stopping sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\"" May 8 00:41:36.124157 env[1314]: time="2025-05-08T00:41:36.124126421Z" level=info msg="TearDown network for sandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" successfully" May 8 00:41:36.205806 env[1314]: time="2025-05-08T00:41:36.205716439Z" level=info msg="RemovePodSandbox \"b4c1bf4ca933f0bb34cb47950a7d5f145dee27591378d557726340b88ee8b4bb\" returns successfully" May 8 00:41:36.206335 env[1314]: time="2025-05-08T00:41:36.206309024Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.388 [WARNING][6023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrhhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1c58f86-7966-473c-98f3-e00538745ae1", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574", Pod:"csi-node-driver-rrhhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califffc6941c6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.388 [INFO][6023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.389 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" iface="eth0" netns="" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.389 [INFO][6023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.389 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.414 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.414 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.414 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.422 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.422 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.425 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:41:36.430522 env[1314]: 2025-05-08 00:41:36.427 [INFO][6023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.430522 env[1314]: time="2025-05-08T00:41:36.430450627Z" level=info msg="TearDown network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" successfully" May 8 00:41:36.430522 env[1314]: time="2025-05-08T00:41:36.430493268Z" level=info msg="StopPodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" returns successfully" May 8 00:41:36.431394 env[1314]: time="2025-05-08T00:41:36.431231097Z" level=info msg="RemovePodSandbox for \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:41:36.431394 env[1314]: time="2025-05-08T00:41:36.431272205Z" level=info msg="Forcibly stopping sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\"" May 8 00:41:36.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.16:22-10.0.0.1:53628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:36.499942 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:53628.service. May 8 00:41:36.520016 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:41:36.520182 kernel: audit: type=1130 audit(1746664896.499:604): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.16:22-10.0.0.1:53628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.502 [WARNING][6053] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrhhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1c58f86-7966-473c-98f3-e00538745ae1", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0648e73fd6ee2b5ebc1eb205d16aac9b1ae3b5862871b0d78abc5a44184df574", Pod:"csi-node-driver-rrhhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califffc6941c6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.502 [INFO][6053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.502 [INFO][6053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" iface="eth0" netns="" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.502 [INFO][6053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.502 [INFO][6053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.528 [INFO][6062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.528 [INFO][6062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.528 [INFO][6062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.535 [WARNING][6062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.535 [INFO][6062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" HandleID="k8s-pod-network.6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" Workload="localhost-k8s-csi--node--driver--rrhhb-eth0" May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.537 [INFO][6062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:41:36.541090 env[1314]: 2025-05-08 00:41:36.539 [INFO][6053] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688" May 8 00:41:36.541728 env[1314]: time="2025-05-08T00:41:36.541123759Z" level=info msg="TearDown network for sandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" successfully" May 8 00:41:36.597000 audit[6060]: USER_ACCT pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.598321 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 53628 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:36.632000 audit[6060]: CRED_ACQ pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.634246 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:36.640760 systemd-logind[1294]: New session 31 of user core. May 8 00:41:36.641112 systemd[1]: Started session-31.scope. 
May 8 00:41:36.663006 kernel: audit: type=1101 audit(1746664896.597:605): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.663098 kernel: audit: type=1103 audit(1746664896.632:606): pid=6060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.663119 kernel: audit: type=1006 audit(1746664896.632:607): pid=6060 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 May 8 00:41:36.632000 audit[6060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd19b17c10 a2=3 a3=0 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:36.670269 kernel: audit: type=1300 audit(1746664896.632:607): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd19b17c10 a2=3 a3=0 items=0 ppid=1 pid=6060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:36.670340 kernel: audit: type=1327 audit(1746664896.632:607): proctitle=737368643A20636F7265205B707269765D May 8 00:41:36.632000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:36.671861 kernel: audit: type=1105 audit(1746664896.645:608): pid=6060 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' May 8 00:41:36.645000 audit[6060]: USER_START pid=6060 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.676475 kernel: audit: type=1103 audit(1746664896.646:609): pid=6071 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.646000 audit[6071]: CRED_ACQ pid=6071 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.683798 env[1314]: time="2025-05-08T00:41:36.683669614Z" level=info msg="RemovePodSandbox \"6f7548cd2d73d0329cc8b9f6d90c0766e8bce34526c3b9a3a55fb07ed978e688\" returns successfully" May 8 00:41:36.684746 env[1314]: time="2025-05-08T00:41:36.684332740Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:41:36.927666 sshd[6060]: pam_unix(sshd:session): session closed for user core May 8 00:41:36.927000 audit[6060]: USER_END pid=6060 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.931153 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:53628.service: Deactivated successfully. May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.851 [WARNING][6090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"35415a0b-9f3d-4f12-b555-b4c08d155deb", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65", Pod:"coredns-7db6d8ff4d-xrfkq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali758d0da9acf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.852 [INFO][6090] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.852 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" iface="eth0" netns="" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.852 [INFO][6090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.852 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.891 [INFO][6106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.891 [INFO][6106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.891 [INFO][6106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.923 [WARNING][6106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.923 [INFO][6106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.926 [INFO][6106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:36.933601 env[1314]: 2025-05-08 00:41:36.929 [INFO][6090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:36.933601 env[1314]: time="2025-05-08T00:41:36.933004625Z" level=info msg="TearDown network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" successfully" May 8 00:41:36.933601 env[1314]: time="2025-05-08T00:41:36.933043939Z" level=info msg="StopPodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" returns successfully" May 8 00:41:36.933601 env[1314]: time="2025-05-08T00:41:36.933587240Z" level=info msg="RemovePodSandbox for \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:41:36.932238 systemd[1]: session-31.scope: Deactivated successfully. May 8 00:41:36.934435 env[1314]: time="2025-05-08T00:41:36.933620052Z" level=info msg="Forcibly stopping sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\"" May 8 00:41:36.934483 systemd-logind[1294]: Session 31 logged out. Waiting for processes to exit. May 8 00:41:36.936003 systemd-logind[1294]: Removed session 31. 
May 8 00:41:36.927000 audit[6060]: CRED_DISP pid=6060 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.953123 kernel: audit: type=1106 audit(1746664896.927:610): pid=6060 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.953247 kernel: audit: type=1104 audit(1746664896.927:611): pid=6060 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:36.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.16:22-10.0.0.1:53628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.097 [WARNING][6131] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"35415a0b-9f3d-4f12-b555-b4c08d155deb", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bac65dcbe9a15634f4afdf2059d75db2cba39f87850a78d390a95c7147b13c65", Pod:"coredns-7db6d8ff4d-xrfkq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali758d0da9acf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.097 [INFO][6131] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.097 [INFO][6131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" iface="eth0" netns="" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.097 [INFO][6131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.097 [INFO][6131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.115 [INFO][6140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.115 [INFO][6140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.115 [INFO][6140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.134 [WARNING][6140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.134 [INFO][6140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" HandleID="k8s-pod-network.0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" Workload="localhost-k8s-coredns--7db6d8ff4d--xrfkq-eth0" May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.137 [INFO][6140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:37.140916 env[1314]: 2025-05-08 00:41:37.139 [INFO][6131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a" May 8 00:41:37.141585 env[1314]: time="2025-05-08T00:41:37.141546578Z" level=info msg="TearDown network for sandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" successfully" May 8 00:41:37.234245 env[1314]: time="2025-05-08T00:41:37.234191028Z" level=info msg="RemovePodSandbox \"0e756429967be6ab9850b369064960c94c44423ea56a24ab865c69df2d52cf5a\" returns successfully" May 8 00:41:37.234896 env[1314]: time="2025-05-08T00:41:37.234861399Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.306 [WARNING][6164] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7ae0688-a473-448c-b8b9-7f2261bb0d9a", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139", Pod:"calico-apiserver-655fb5665b-b5526", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic90e0b5b7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.306 [INFO][6164] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.306 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" iface="eth0" netns="" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.306 [INFO][6164] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.306 [INFO][6164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.332 [INFO][6172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.332 [INFO][6172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.332 [INFO][6172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.338 [WARNING][6172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.338 [INFO][6172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.339 [INFO][6172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:37.342608 env[1314]: 2025-05-08 00:41:37.341 [INFO][6164] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.343146 env[1314]: time="2025-05-08T00:41:37.342639692Z" level=info msg="TearDown network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" successfully" May 8 00:41:37.343146 env[1314]: time="2025-05-08T00:41:37.342675450Z" level=info msg="StopPodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" returns successfully" May 8 00:41:37.343317 env[1314]: time="2025-05-08T00:41:37.343274997Z" level=info msg="RemovePodSandbox for \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:41:37.343369 env[1314]: time="2025-05-08T00:41:37.343328388Z" level=info msg="Forcibly stopping sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\"" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.383 [WARNING][6195] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0", GenerateName:"calico-apiserver-655fb5665b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7ae0688-a473-448c-b8b9-7f2261bb0d9a", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655fb5665b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7668fcff1aea23f28a6a9cfb91687b46cd39ea1b8db1cb00a600a87f4754139", Pod:"calico-apiserver-655fb5665b-b5526", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic90e0b5b7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.384 [INFO][6195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.384 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" iface="eth0" netns="" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.384 [INFO][6195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.384 [INFO][6195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.407 [INFO][6204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.407 [INFO][6204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.408 [INFO][6204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.414 [WARNING][6204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.414 [INFO][6204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" HandleID="k8s-pod-network.9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" Workload="localhost-k8s-calico--apiserver--655fb5665b--b5526-eth0" May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.415 [INFO][6204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:37.418946 env[1314]: 2025-05-08 00:41:37.417 [INFO][6195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195" May 8 00:41:37.419443 env[1314]: time="2025-05-08T00:41:37.418981943Z" level=info msg="TearDown network for sandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" successfully" May 8 00:41:37.557407 env[1314]: time="2025-05-08T00:41:37.557196383Z" level=info msg="RemovePodSandbox \"9c0b48a0dc69e1c2ef0bdd9dc273a77ee56d82a4c7a63f06f94b87ec39399195\" returns successfully" May 8 00:41:37.557903 env[1314]: time="2025-05-08T00:41:37.557866773Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.612 [WARNING][6226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0", GenerateName:"calico-kube-controllers-575f4bf5b7-", Namespace:"calico-system", SelfLink:"", UID:"5444d20d-8a4f-4e35-a777-fef99f439552", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"575f4bf5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36", Pod:"calico-kube-controllers-575f4bf5b7-jhlnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68e86e77a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.612 [INFO][6226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.612 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" iface="eth0" netns="" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.612 [INFO][6226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.612 [INFO][6226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.635 [INFO][6235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.635 [INFO][6235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.635 [INFO][6235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.640 [WARNING][6235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.640 [INFO][6235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.643 [INFO][6235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:37.646455 env[1314]: 2025-05-08 00:41:37.644 [INFO][6226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.647024 env[1314]: time="2025-05-08T00:41:37.646479419Z" level=info msg="TearDown network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" successfully" May 8 00:41:37.647024 env[1314]: time="2025-05-08T00:41:37.646636297Z" level=info msg="StopPodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" returns successfully" May 8 00:41:37.647250 env[1314]: time="2025-05-08T00:41:37.647179036Z" level=info msg="RemovePodSandbox for \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:41:37.647250 env[1314]: time="2025-05-08T00:41:37.647220885Z" level=info msg="Forcibly stopping sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\"" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.683 [WARNING][6258] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0", GenerateName:"calico-kube-controllers-575f4bf5b7-", Namespace:"calico-system", SelfLink:"", UID:"5444d20d-8a4f-4e35-a777-fef99f439552", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"575f4bf5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"528af442213053180e137cbd401f314334210e322b6841554d2d299903264b36", Pod:"calico-kube-controllers-575f4bf5b7-jhlnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68e86e77a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.683 [INFO][6258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.683 [INFO][6258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" iface="eth0" netns="" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.683 [INFO][6258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.683 [INFO][6258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.703 [INFO][6266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.703 [INFO][6266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.703 [INFO][6266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.775 [WARNING][6266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.775 [INFO][6266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" HandleID="k8s-pod-network.a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" Workload="localhost-k8s-calico--kube--controllers--575f4bf5b7--jhlnt-eth0" May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.778 [INFO][6266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:37.781157 env[1314]: 2025-05-08 00:41:37.779 [INFO][6258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285" May 8 00:41:37.782482 env[1314]: time="2025-05-08T00:41:37.781190636Z" level=info msg="TearDown network for sandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" successfully" May 8 00:41:37.822565 env[1314]: time="2025-05-08T00:41:37.822283867Z" level=info msg="RemovePodSandbox \"a05f0a570ebf24a8cf42510b42037fdaf6fc5abbcf48f1ea5090ebceb482e285\" returns successfully" May 8 00:41:41.935982 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:41:41.936144 kernel: audit: type=1130 audit(1746664901.930:613): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.16:22-10.0.0.1:53636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:41.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.16:22-10.0.0.1:53636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:41.930916 systemd[1]: Started sshd@31-10.0.0.16:22-10.0.0.1:53636.service. May 8 00:41:41.962000 audit[6274]: USER_ACCT pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.963772 sshd[6274]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:41.967000 audit[6274]: CRED_ACQ pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.968664 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:41.972437 kernel: audit: type=1101 audit(1746664901.962:614): pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.972550 kernel: audit: type=1103 audit(1746664901.967:615): pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.972595 kernel: audit: type=1006 audit(1746664901.967:616): pid=6274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 May 8 00:41:41.975126 kernel: audit: type=1300 audit(1746664901.967:616): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9e509d90 a2=3 a3=0 items=0 ppid=1 pid=6274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:41.967000 audit[6274]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9e509d90 a2=3 a3=0 items=0 ppid=1 pid=6274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:41.974733 systemd-logind[1294]: New session 32 of user core. May 8 00:41:41.975262 systemd[1]: Started session-32.scope. May 8 00:41:41.981766 kernel: audit: type=1327 audit(1746664901.967:616): proctitle=737368643A20636F7265205B707269765D May 8 00:41:41.967000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:41.981000 audit[6274]: USER_START pid=6274 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.983000 audit[6277]: CRED_ACQ pid=6277 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.991606 kernel: audit: type=1105 audit(1746664901.981:617): pid=6274 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:41.991719 kernel: audit: type=1103 audit(1746664901.983:618): pid=6277 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:42.093355 sshd[6274]: 
pam_unix(sshd:session): session closed for user core May 8 00:41:42.093000 audit[6274]: USER_END pid=6274 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:42.096399 systemd[1]: sshd@31-10.0.0.16:22-10.0.0.1:53636.service: Deactivated successfully. May 8 00:41:42.098117 systemd[1]: session-32.scope: Deactivated successfully. May 8 00:41:42.098148 systemd-logind[1294]: Session 32 logged out. Waiting for processes to exit. May 8 00:41:42.099378 systemd-logind[1294]: Removed session 32. May 8 00:41:42.093000 audit[6274]: CRED_DISP pid=6274 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:42.104028 kernel: audit: type=1106 audit(1746664902.093:619): pid=6274 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:42.104088 kernel: audit: type=1104 audit(1746664902.093:620): pid=6274 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:42.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.16:22-10.0.0.1:53636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:41:44.675953 systemd[1]: run-containerd-runc-k8s.io-cb95fdc351c2593391d5a8dece22b799705043909f36dc3c66401d3042657ddb-runc.LwRXGb.mount: Deactivated successfully. May 8 00:41:47.097330 systemd[1]: Started sshd@32-10.0.0.16:22-10.0.0.1:47180.service. May 8 00:41:47.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.16:22-10.0.0.1:47180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:47.117008 kernel: kauditd_printk_skb: 1 callbacks suppressed May 8 00:41:47.117120 kernel: audit: type=1130 audit(1746664907.096:622): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.16:22-10.0.0.1:47180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:41:47.143000 audit[6313]: USER_ACCT pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.144674 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 47180 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:41:47.146999 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:41:47.151986 systemd[1]: Started session-33.scope. May 8 00:41:47.152871 systemd-logind[1294]: New session 33 of user core. 
May 8 00:41:47.145000 audit[6313]: CRED_ACQ pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.202666 kernel: audit: type=1101 audit(1746664907.143:623): pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.202736 kernel: audit: type=1103 audit(1746664907.145:624): pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.202771 kernel: audit: type=1006 audit(1746664907.145:625): pid=6313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 May 8 00:41:47.145000 audit[6313]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7c58dab0 a2=3 a3=0 items=0 ppid=1 pid=6313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:47.209671 kernel: audit: type=1300 audit(1746664907.145:625): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7c58dab0 a2=3 a3=0 items=0 ppid=1 pid=6313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:41:47.145000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 8 00:41:47.165000 audit[6313]: USER_START pid=6313 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.215963 kernel: audit: type=1327 audit(1746664907.145:625): proctitle=737368643A20636F7265205B707269765D May 8 00:41:47.216025 kernel: audit: type=1105 audit(1746664907.165:626): pid=6313 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.216063 kernel: audit: type=1103 audit(1746664907.198:627): pid=6316 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.198000 audit[6316]: CRED_ACQ pid=6316 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.302940 sshd[6313]: pam_unix(sshd:session): session closed for user core May 8 00:41:47.303000 audit[6313]: USER_END pid=6313 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.305510 systemd[1]: sshd@32-10.0.0.16:22-10.0.0.1:47180.service: Deactivated successfully. May 8 00:41:47.307088 systemd[1]: session-33.scope: Deactivated successfully. May 8 00:41:47.307768 systemd-logind[1294]: Session 33 logged out. Waiting for processes to exit. May 8 00:41:47.308893 systemd-logind[1294]: Removed session 33. 
May 8 00:41:47.303000 audit[6313]: CRED_DISP pid=6313 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.312652 kernel: audit: type=1106 audit(1746664907.303:628): pid=6313 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.312745 kernel: audit: type=1104 audit(1746664907.303:629): pid=6313 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 8 00:41:47.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.16:22-10.0.0.1:47180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'